00:00:00.001 Started by upstream project "autotest-nightly" build number 3913
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3289
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.002 Started by timer
00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.071 The recommended git tool is: git
00:00:00.071 using credential 00000000-0000-0000-0000-000000000002
00:00:00.083 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/iscsi-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.107 Fetching changes from the remote Git repository
00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.144 Using shallow fetch with depth 1
00:00:00.144 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.144 > git --version # timeout=10
00:00:00.178 > git --version # 'git version 2.39.2'
00:00:00.178 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.211 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.211 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.534 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.544 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.555 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:04.555 > git config core.sparsecheckout # timeout=10
00:00:04.565 > git read-tree -mu HEAD # timeout=10
00:00:04.583 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5
00:00:04.610 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:04.610 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:04.717 [Pipeline] Start of Pipeline
00:00:04.729 [Pipeline] library
00:00:04.730 Loading library shm_lib@master
00:00:04.730 Library shm_lib@master is cached. Copying from home.
00:00:04.743 [Pipeline] node
00:00:04.756 Running on VM-host-SM17 in /var/jenkins/workspace/iscsi-vg-autotest
00:00:04.758 [Pipeline] {
00:00:04.766 [Pipeline] catchError
00:00:04.767 [Pipeline] {
00:00:04.779 [Pipeline] wrap
00:00:04.789 [Pipeline] {
00:00:04.798 [Pipeline] stage
00:00:04.800 [Pipeline] { (Prologue)
00:00:04.820 [Pipeline] echo
00:00:04.821 Node: VM-host-SM17
00:00:04.827 [Pipeline] cleanWs
00:00:05.890 [WS-CLEANUP] Deleting project workspace...
00:00:05.890 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.895 [WS-CLEANUP] done
00:00:06.098 [Pipeline] setCustomBuildProperty
00:00:06.168 [Pipeline] httpRequest
00:00:06.201 [Pipeline] echo
00:00:06.202 Sorcerer 10.211.164.101 is alive
00:00:06.207 [Pipeline] httpRequest
00:00:06.210 HttpMethod: GET
00:00:06.211 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:06.211 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:06.213 Response Code: HTTP/1.1 200 OK
00:00:06.213 Success: Status code 200 is in the accepted range: 200,404
00:00:06.213 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:07.127 [Pipeline] sh
00:00:07.405 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:07.420 [Pipeline] httpRequest
00:00:07.435 [Pipeline] echo
00:00:07.436 Sorcerer 10.211.164.101 is alive
00:00:07.442 [Pipeline] httpRequest
00:00:07.446 HttpMethod: GET
00:00:07.447 URL: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:07.447 Sending request to url: http://10.211.164.101/packages/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:07.449 Response Code: HTTP/1.1 200 OK
00:00:07.449 Success: Status code 200 is in the accepted range: 200,404
00:00:07.450 Saving response body to /var/jenkins/workspace/iscsi-vg-autotest/spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:24.217 [Pipeline] sh
00:00:24.503 + tar --no-same-owner -xf spdk_f7b31b2b9679b48e9e13514a6b668058bb45fd56.tar.gz
00:00:27.051 [Pipeline] sh
00:00:27.334 + git -C spdk log --oneline -n5
00:00:27.334 f7b31b2b9 log: declare g_deprecation_epoch static
00:00:27.334 21d0c3ad6 trace: declare g_user_thread_index_start, g_ut_array and g_ut_array_mutex static
00:00:27.334 3731556bd lvol: declare g_lvol_if static
00:00:27.334 f8404a2d4 nvme: declare g_current_transport_index and g_spdk_transports static
00:00:27.334 34efb6523 dma: declare g_dma_mutex and g_dma_memory_domains static
00:00:27.352 [Pipeline] writeFile
00:00:27.367 [Pipeline] sh
00:00:27.647 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:27.659 [Pipeline] sh
00:00:27.941 + cat autorun-spdk.conf
00:00:27.941 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:27.941 SPDK_TEST_ISCSI_INITIATOR=1
00:00:27.941 SPDK_TEST_ISCSI=1
00:00:27.941 SPDK_TEST_RBD=1
00:00:27.941 SPDK_RUN_ASAN=1
00:00:27.941 SPDK_RUN_UBSAN=1
00:00:27.941 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:27.948 RUN_NIGHTLY=1
00:00:27.950 [Pipeline] }
00:00:27.966 [Pipeline] // stage
00:00:27.981 [Pipeline] stage
00:00:27.984 [Pipeline] { (Run VM)
00:00:27.999 [Pipeline] sh
00:00:28.282 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:28.282 + echo 'Start stage prepare_nvme.sh'
00:00:28.282 Start stage prepare_nvme.sh
00:00:28.282 + [[ -n 1 ]]
00:00:28.282 + disk_prefix=ex1
00:00:28.282 + [[ -n /var/jenkins/workspace/iscsi-vg-autotest ]]
00:00:28.282 + [[ -e /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf ]]
00:00:28.282 + source /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf
00:00:28.282 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.282 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:00:28.282 ++ SPDK_TEST_ISCSI=1
00:00:28.282 ++ SPDK_TEST_RBD=1
00:00:28.282 ++ SPDK_RUN_ASAN=1
00:00:28.282 ++ SPDK_RUN_UBSAN=1
00:00:28.282 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:28.282 ++ RUN_NIGHTLY=1
00:00:28.282 + cd /var/jenkins/workspace/iscsi-vg-autotest
00:00:28.282 + nvme_files=()
00:00:28.282 + declare -A nvme_files
00:00:28.282 + backend_dir=/var/lib/libvirt/images/backends
00:00:28.282 + nvme_files['nvme.img']=5G
00:00:28.282 + nvme_files['nvme-cmb.img']=5G
00:00:28.282 + nvme_files['nvme-multi0.img']=4G
00:00:28.282 + nvme_files['nvme-multi1.img']=4G
00:00:28.282 + nvme_files['nvme-multi2.img']=4G
00:00:28.282 + nvme_files['nvme-openstack.img']=8G
00:00:28.282 + nvme_files['nvme-zns.img']=5G
00:00:28.282 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:28.282 + (( SPDK_TEST_FTL == 1 ))
00:00:28.282 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:28.282 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:00:28.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:00:28.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:00:28.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:00:28.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:00:28.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:00:28.282 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:28.282 + for nvme in "${!nvme_files[@]}"
00:00:28.282 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:00:28.541 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:28.541 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:00:28.541 + echo 'End stage prepare_nvme.sh'
00:00:28.542 End stage prepare_nvme.sh
00:00:28.554 [Pipeline] sh
00:00:28.836 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:28.836 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora38
00:00:28.836
00:00:28.836 DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant
00:00:28.836 SPDK_DIR=/var/jenkins/workspace/iscsi-vg-autotest/spdk
00:00:28.836 VAGRANT_TARGET=/var/jenkins/workspace/iscsi-vg-autotest
00:00:28.836 HELP=0
00:00:28.836 DRY_RUN=0
00:00:28.836 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:00:28.836 NVME_DISKS_TYPE=nvme,nvme,
00:00:28.836 NVME_AUTO_CREATE=0
00:00:28.836 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:00:28.836 NVME_CMB=,,
00:00:28.836 NVME_PMR=,,
00:00:28.836 NVME_ZNS=,,
00:00:28.836 NVME_MS=,,
00:00:28.836 NVME_FDP=,,
00:00:28.836 SPDK_VAGRANT_DISTRO=fedora38
00:00:28.836 SPDK_VAGRANT_VMCPU=10
00:00:28.836 SPDK_VAGRANT_VMRAM=12288
00:00:28.836 SPDK_VAGRANT_PROVIDER=libvirt
00:00:28.836 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:28.836 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:28.836 SPDK_OPENSTACK_NETWORK=0
00:00:28.836 VAGRANT_PACKAGE_BOX=0
00:00:28.836 VAGRANTFILE=/var/jenkins/workspace/iscsi-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:28.836 FORCE_DISTRO=true
00:00:28.836 VAGRANT_BOX_VERSION=
00:00:28.836 EXTRA_VAGRANTFILES=
00:00:28.836 NIC_MODEL=e1000
00:00:28.836
00:00:28.836 mkdir: created directory '/var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt'
00:00:28.836 /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt /var/jenkins/workspace/iscsi-vg-autotest
00:00:31.381 Bringing machine 'default' up with 'libvirt' provider...
00:00:31.655 ==> default: Creating image (snapshot of base box volume).
00:00:31.915 ==> default: Creating domain with the following settings...
00:00:31.915 ==> default:  -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721699740_018197b132e4794273bb
00:00:31.915 ==> default:  -- Domain type: kvm
00:00:31.915 ==> default:  -- Cpus: 10
00:00:31.915 ==> default:  -- Feature: acpi
00:00:31.915 ==> default:  -- Feature: apic
00:00:31.915 ==> default:  -- Feature: pae
00:00:31.915 ==> default:  -- Memory: 12288M
00:00:31.915 ==> default:  -- Memory Backing: hugepages:
00:00:31.915 ==> default:  -- Management MAC:
00:00:31.915 ==> default:  -- Loader:
00:00:31.915 ==> default:  -- Nvram:
00:00:31.915 ==> default:  -- Base box: spdk/fedora38
00:00:31.915 ==> default:  -- Storage pool: default
00:00:31.915 ==> default:  -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721699740_018197b132e4794273bb.img (20G)
00:00:31.915 ==> default:  -- Volume Cache: default
00:00:31.915 ==> default:  -- Kernel:
00:00:31.915 ==> default:  -- Initrd:
00:00:31.915 ==> default:  -- Graphics Type: vnc
00:00:31.915 ==> default:  -- Graphics Port: -1
00:00:31.915 ==> default:  -- Graphics IP: 127.0.0.1
00:00:31.915 ==> default:  -- Graphics Password: Not defined
00:00:31.915 ==> default:  -- Video Type: cirrus
00:00:31.915 ==> default:  -- Video VRAM: 9216
00:00:31.915 ==> default:  -- Sound Type:
00:00:31.915 ==> default:  -- Keymap: en-us
00:00:31.915 ==> default:  -- TPM Path:
00:00:31.915 ==> default:  -- INPUT: type=mouse, bus=ps2
00:00:31.915 ==> default:  -- Command line args:
00:00:31.915 ==> default:  -> value=-device,
00:00:31.915 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:31.915 ==> default:  -> value=-drive,
00:00:31.915 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:00:31.915 ==> default:  -> value=-device,
00:00:31.915 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.915 ==> default:  -> value=-device,
00:00:31.915 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:31.915 ==> default:  -> value=-drive,
00:00:31.915 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:31.915 ==> default:  -> value=-device,
00:00:31.915 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.915 ==> default:  -> value=-drive,
00:00:31.915 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:31.915 ==> default:  -> value=-device,
00:00:31.915 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:31.915 ==> default:  -> value=-drive,
00:00:31.915 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:31.915 ==> default:  -> value=-device,
00:00:31.915 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:32.175 ==> default: Creating shared folders metadata...
00:00:32.175 ==> default: Starting domain.
00:00:33.555 ==> default: Waiting for domain to get an IP address...
00:00:51.649 ==> default: Waiting for SSH to become available...
00:00:51.649 ==> default: Configuring and enabling network interfaces...
00:00:54.184     default: SSH address: 192.168.121.68:22
00:00:54.184     default: SSH username: vagrant
00:00:54.184     default: SSH auth method: private key
00:00:56.717 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:04.838 ==> default: Mounting SSHFS shared folder...
00:01:05.772 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:05.772 ==> default: Checking Mount..
00:01:07.149 ==> default: Folder Successfully Mounted!
00:01:07.149 ==> default: Running provisioner: file...
00:01:07.716     default: ~/.gitconfig => .gitconfig
00:01:08.283
00:01:08.283   SUCCESS!
00:01:08.283
00:01:08.283   cd to /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:01:08.283   Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:08.283   Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:01:08.283
00:01:08.292 [Pipeline] }
00:01:08.310 [Pipeline] // stage
00:01:08.319 [Pipeline] dir
00:01:08.320 Running in /var/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt
00:01:08.322 [Pipeline] {
00:01:08.335 [Pipeline] catchError
00:01:08.337 [Pipeline] {
00:01:08.350 [Pipeline] sh
00:01:08.629 + vagrant ssh-config --host vagrant
00:01:08.630 + sed -ne /^Host/,$p
00:01:08.630 + tee ssh_conf
00:01:11.923 Host vagrant
00:01:11.923 HostName 192.168.121.68
00:01:11.923 User vagrant
00:01:11.923 Port 22
00:01:11.923 UserKnownHostsFile /dev/null
00:01:11.923 StrictHostKeyChecking no
00:01:11.923 PasswordAuthentication no
00:01:11.923 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:01:11.923 IdentitiesOnly yes
00:01:11.923 LogLevel FATAL
00:01:11.923 ForwardAgent yes
00:01:11.923 ForwardX11 yes
00:01:11.923
00:01:11.936 [Pipeline] withEnv
00:01:11.938 [Pipeline] {
00:01:11.953 [Pipeline] sh
00:01:12.280 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:12.280 source /etc/os-release
00:01:12.280 [[ -e /image.version ]] && img=$(< /image.version)
00:01:12.280 # Minimal, systemd-like check.
00:01:12.280 if [[ -e /.dockerenv ]]; then
00:01:12.280 # Clear garbage from the node's name:
00:01:12.280 # agt-er_autotest_547-896 -> autotest_547-896
00:01:12.280 # $HOSTNAME is the actual container id
00:01:12.280 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:12.280 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:12.280 # We can assume this is a mount from a host where container is running,
00:01:12.280 # so fetch its hostname to easily identify the target swarm worker.
00:01:12.280 container="$(< /etc/hostname) ($agent)"
00:01:12.280 else
00:01:12.280 # Fallback
00:01:12.280 container=$agent
00:01:12.280 fi
00:01:12.280 fi
00:01:12.280 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:12.280
00:01:12.551 [Pipeline] }
00:01:12.571 [Pipeline] // withEnv
00:01:12.580 [Pipeline] setCustomBuildProperty
00:01:12.595 [Pipeline] stage
00:01:12.598 [Pipeline] { (Tests)
00:01:12.616 [Pipeline] sh
00:01:12.896 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:13.169 [Pipeline] sh
00:01:13.450 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:13.465 [Pipeline] timeout
00:01:13.465 Timeout set to expire in 45 min
00:01:13.467 [Pipeline] {
00:01:13.483 [Pipeline] sh
00:01:13.763 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:14.331 HEAD is now at f7b31b2b9 log: declare g_deprecation_epoch static
00:01:14.343 [Pipeline] sh
00:01:14.626 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:14.900 [Pipeline] sh
00:01:15.180 + scp -F ssh_conf -r /var/jenkins/workspace/iscsi-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:15.454 [Pipeline] sh
00:01:15.735 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=iscsi-vg-autotest ./autoruner.sh spdk_repo
00:01:15.735 ++ readlink -f spdk_repo
00:01:15.994 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:15.994 + [[ -n /home/vagrant/spdk_repo ]]
00:01:15.994 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:15.994 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:15.994 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:15.994 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:15.994 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:15.994 + [[ iscsi-vg-autotest == pkgdep-* ]]
00:01:15.994 + cd /home/vagrant/spdk_repo
00:01:15.994 + source /etc/os-release
00:01:15.994 ++ NAME='Fedora Linux'
00:01:15.994 ++ VERSION='38 (Cloud Edition)'
00:01:15.994 ++ ID=fedora
00:01:15.994 ++ VERSION_ID=38
00:01:15.994 ++ VERSION_CODENAME=
00:01:15.994 ++ PLATFORM_ID=platform:f38
00:01:15.994 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:15.994 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:15.994 ++ LOGO=fedora-logo-icon
00:01:15.994 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:15.994 ++ HOME_URL=https://fedoraproject.org/
00:01:15.994 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:15.994 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:15.994 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:15.994 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:15.994 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:15.994 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:15.994 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:15.994 ++ SUPPORT_END=2024-05-14
00:01:15.994 ++ VARIANT='Cloud Edition'
00:01:15.994 ++ VARIANT_ID=cloud
00:01:15.994 + uname -a
00:01:15.994 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:15.994 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:16.252 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:16.252 Hugepages
00:01:16.252 node hugesize free / total
00:01:16.252 node0 1048576kB 0 / 0
00:01:16.512 node0 2048kB 0 / 0
00:01:16.512
00:01:16.512 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:16.512 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:16.512 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:16.512 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:16.512 + rm -f /tmp/spdk-ld-path
00:01:16.512 + source autorun-spdk.conf
00:01:16.512 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.512 ++ SPDK_TEST_ISCSI_INITIATOR=1
00:01:16.512 ++ SPDK_TEST_ISCSI=1
00:01:16.512 ++ SPDK_TEST_RBD=1
00:01:16.512 ++ SPDK_RUN_ASAN=1
00:01:16.512 ++ SPDK_RUN_UBSAN=1
00:01:16.512 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.512 ++ RUN_NIGHTLY=1
00:01:16.512 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:16.512 + [[ -n '' ]]
00:01:16.512 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:16.512 + for M in /var/spdk/build-*-manifest.txt
00:01:16.512 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:16.512 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.512 + for M in /var/spdk/build-*-manifest.txt
00:01:16.512 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:16.512 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:16.512 ++ uname
00:01:16.512 + [[ Linux == \L\i\n\u\x ]]
00:01:16.512 + sudo dmesg -T
00:01:16.512 + sudo dmesg --clear
00:01:16.512 + dmesg_pid=5102
00:01:16.512 + sudo dmesg -Tw
00:01:16.512 + [[ Fedora Linux == FreeBSD ]]
00:01:16.512 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.512 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.512 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:16.512 + [[ -x /usr/src/fio-static/fio ]]
00:01:16.512 + export FIO_BIN=/usr/src/fio-static/fio
00:01:16.512 + FIO_BIN=/usr/src/fio-static/fio
00:01:16.512 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:16.512 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:16.512 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:16.512 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.512 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.512 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:16.512 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.512 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.512 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:16.512 Test configuration:
00:01:16.512 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.512 SPDK_TEST_ISCSI_INITIATOR=1
00:01:16.512 SPDK_TEST_ISCSI=1
00:01:16.512 SPDK_TEST_RBD=1
00:01:16.512 SPDK_RUN_ASAN=1
00:01:16.512 SPDK_RUN_UBSAN=1
00:01:16.512 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:16.771 RUN_NIGHTLY=1
01:56:25 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
01:56:25 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
01:56:25 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
01:56:25 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
01:56:25 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:56:25 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:56:25 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:56:25 -- paths/export.sh@5 -- $ export PATH
01:56:25 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
01:56:25 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
01:56:25 -- common/autobuild_common.sh@447 -- $ date +%s
01:56:25 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721699785.XXXXXX
01:56:25 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721699785.YkrS6b
01:56:25 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
01:56:25 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
01:56:25 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
01:56:25 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
01:56:25 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
01:56:25 -- common/autobuild_common.sh@463 -- $ get_config_params
01:56:25 -- common/autotest_common.sh@396 -- $ xtrace_disable
01:56:25 -- common/autotest_common.sh@10 -- $ set +x
01:56:25 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk'
01:56:25 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
01:56:25 -- pm/common@17 -- $ local monitor
01:56:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:56:25 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
01:56:25 -- pm/common@25 -- $ sleep 1
01:56:25 -- pm/common@21 -- $ date +%s
01:56:25 -- pm/common@21 -- $ date +%s
01:56:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721699785
01:56:25 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721699785
00:01:16.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721699785_collect-vmstat.pm.log
00:01:16.771 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721699785_collect-cpu-load.pm.log
00:01:17.708 01:56:26 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
01:56:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
01:56:26 -- spdk/autobuild.sh@12 -- $ umask 022
01:56:26 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
01:56:26 -- spdk/autobuild.sh@16 -- $ date -u
00:01:17.708 Tue Jul 23 01:56:26 AM UTC 2024
01:56:26 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:17.708 v24.09-pre-297-gf7b31b2b9
01:56:26 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
01:56:26 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
01:56:26 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
01:56:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable
01:56:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.708 ************************************
00:01:17.708 START TEST asan
00:01:17.708 ************************************
00:01:17.708 using asan
01:56:26 asan -- common/autotest_common.sh@1123 -- $ echo 'using asan'
00:01:17.708
00:01:17.708 real	0m0.000s
00:01:17.708 user	0m0.000s
00:01:17.708 sys	0m0.000s
01:56:26 asan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:17.708 ************************************
00:01:17.708 END TEST asan
00:01:17.708 ************************************
01:56:26 asan -- common/autotest_common.sh@10 -- $ set +x
01:56:26 -- common/autotest_common.sh@1142 -- $ return 0
01:56:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
01:56:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
01:56:26 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
01:56:26 -- common/autotest_common.sh@1105 -- $ xtrace_disable
01:56:26 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.708 ************************************
00:01:17.708 START TEST ubsan
00:01:17.708 ************************************
00:01:17.708 using ubsan
01:56:26 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:17.708
00:01:17.708 real	0m0.000s
00:01:17.708 user	0m0.000s
00:01:17.708 sys	0m0.000s
01:56:26 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
01:56:26 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:17.708 ************************************
00:01:17.708 END TEST ubsan
00:01:17.708 ************************************
00:01:17.967 01:56:26 -- common/autotest_common.sh@1142 -- $ return 0
01:56:26 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
01:56:26 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
01:56:26 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
01:56:26 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
01:56:26 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
01:56:26 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
01:56:26 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:17.968 01:56:26 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
01:56:26 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:01:17.968 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:17.968 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:18.534 Using 'verbs' RDMA provider
00:01:34.364 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:46.568 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:46.568 Creating mk/config.mk...done.
00:01:46.568 Creating mk/cc.flags.mk...done.
00:01:46.568 Type 'make' to build.
00:01:46.568 01:56:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:01:46.568 01:56:55 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:46.568 01:56:55 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:46.568 01:56:55 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.568 ************************************ 00:01:46.568 START TEST make 00:01:46.568 ************************************ 00:01:46.568 01:56:55 make -- common/autotest_common.sh@1123 -- $ make -j10 00:01:46.841 make[1]: Nothing to be done for 'all'. 00:01:57.059 The Meson build system 00:01:57.059 Version: 1.3.1 00:01:57.059 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:57.059 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:57.059 Build type: native build 00:01:57.059 Program cat found: YES (/usr/bin/cat) 00:01:57.059 Project name: DPDK 00:01:57.059 Project version: 24.03.0 00:01:57.059 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:57.059 C linker for the host machine: cc ld.bfd 2.39-16 00:01:57.059 Host machine cpu family: x86_64 00:01:57.059 Host machine cpu: x86_64 00:01:57.059 Message: ## Building in Developer Mode ## 00:01:57.059 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:57.059 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:57.059 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:57.059 Program python3 found: YES (/usr/bin/python3) 00:01:57.059 Program cat found: YES (/usr/bin/cat) 00:01:57.059 Compiler for C supports arguments -march=native: YES 00:01:57.059 Checking for size of "void *" : 8 00:01:57.059 Checking for size of "void *" : 8 (cached) 00:01:57.059 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:57.059 Library m found: YES 00:01:57.059 Library numa found: YES 00:01:57.059 Has header "numaif.h" : YES 
00:01:57.059 Library fdt found: NO 00:01:57.059 Library execinfo found: NO 00:01:57.059 Has header "execinfo.h" : YES 00:01:57.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:57.059 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:57.059 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:57.059 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:57.059 Run-time dependency openssl found: YES 3.0.9 00:01:57.059 Run-time dependency libpcap found: YES 1.10.4 00:01:57.059 Has header "pcap.h" with dependency libpcap: YES 00:01:57.059 Compiler for C supports arguments -Wcast-qual: YES 00:01:57.059 Compiler for C supports arguments -Wdeprecated: YES 00:01:57.059 Compiler for C supports arguments -Wformat: YES 00:01:57.059 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:57.059 Compiler for C supports arguments -Wformat-security: NO 00:01:57.059 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:57.059 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:57.059 Compiler for C supports arguments -Wnested-externs: YES 00:01:57.059 Compiler for C supports arguments -Wold-style-definition: YES 00:01:57.059 Compiler for C supports arguments -Wpointer-arith: YES 00:01:57.059 Compiler for C supports arguments -Wsign-compare: YES 00:01:57.059 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:57.059 Compiler for C supports arguments -Wundef: YES 00:01:57.059 Compiler for C supports arguments -Wwrite-strings: YES 00:01:57.059 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:57.059 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:57.059 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:57.059 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:57.059 Program objdump found: YES (/usr/bin/objdump) 00:01:57.059 Compiler for C supports arguments -mavx512f: YES 00:01:57.059 Checking if "AVX512 
checking" compiles: YES 00:01:57.059 Fetching value of define "__SSE4_2__" : 1 00:01:57.059 Fetching value of define "__AES__" : 1 00:01:57.059 Fetching value of define "__AVX__" : 1 00:01:57.059 Fetching value of define "__AVX2__" : 1 00:01:57.059 Fetching value of define "__AVX512BW__" : (undefined) 00:01:57.059 Fetching value of define "__AVX512CD__" : (undefined) 00:01:57.059 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:57.059 Fetching value of define "__AVX512F__" : (undefined) 00:01:57.059 Fetching value of define "__AVX512VL__" : (undefined) 00:01:57.059 Fetching value of define "__PCLMUL__" : 1 00:01:57.059 Fetching value of define "__RDRND__" : 1 00:01:57.059 Fetching value of define "__RDSEED__" : 1 00:01:57.059 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:57.059 Fetching value of define "__znver1__" : (undefined) 00:01:57.059 Fetching value of define "__znver2__" : (undefined) 00:01:57.059 Fetching value of define "__znver3__" : (undefined) 00:01:57.059 Fetching value of define "__znver4__" : (undefined) 00:01:57.059 Library asan found: YES 00:01:57.059 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:57.059 Message: lib/log: Defining dependency "log" 00:01:57.059 Message: lib/kvargs: Defining dependency "kvargs" 00:01:57.059 Message: lib/telemetry: Defining dependency "telemetry" 00:01:57.059 Library rt found: YES 00:01:57.059 Checking for function "getentropy" : NO 00:01:57.059 Message: lib/eal: Defining dependency "eal" 00:01:57.059 Message: lib/ring: Defining dependency "ring" 00:01:57.059 Message: lib/rcu: Defining dependency "rcu" 00:01:57.059 Message: lib/mempool: Defining dependency "mempool" 00:01:57.059 Message: lib/mbuf: Defining dependency "mbuf" 00:01:57.059 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:57.059 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:57.059 Compiler for C supports arguments -mpclmul: YES 00:01:57.059 Compiler for C supports arguments 
-maes: YES 00:01:57.059 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:57.059 Compiler for C supports arguments -mavx512bw: YES 00:01:57.059 Compiler for C supports arguments -mavx512dq: YES 00:01:57.059 Compiler for C supports arguments -mavx512vl: YES 00:01:57.059 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:57.059 Compiler for C supports arguments -mavx2: YES 00:01:57.059 Compiler for C supports arguments -mavx: YES 00:01:57.059 Message: lib/net: Defining dependency "net" 00:01:57.060 Message: lib/meter: Defining dependency "meter" 00:01:57.060 Message: lib/ethdev: Defining dependency "ethdev" 00:01:57.060 Message: lib/pci: Defining dependency "pci" 00:01:57.060 Message: lib/cmdline: Defining dependency "cmdline" 00:01:57.060 Message: lib/hash: Defining dependency "hash" 00:01:57.060 Message: lib/timer: Defining dependency "timer" 00:01:57.060 Message: lib/compressdev: Defining dependency "compressdev" 00:01:57.060 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:57.060 Message: lib/dmadev: Defining dependency "dmadev" 00:01:57.060 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:57.060 Message: lib/power: Defining dependency "power" 00:01:57.060 Message: lib/reorder: Defining dependency "reorder" 00:01:57.060 Message: lib/security: Defining dependency "security" 00:01:57.060 Has header "linux/userfaultfd.h" : YES 00:01:57.060 Has header "linux/vduse.h" : YES 00:01:57.060 Message: lib/vhost: Defining dependency "vhost" 00:01:57.060 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:57.060 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:57.060 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:57.060 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:57.060 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:57.060 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:57.060 Message: 
Disabling ml/* drivers: missing internal dependency "mldev" 00:01:57.060 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:57.060 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:57.060 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:57.060 Program doxygen found: YES (/usr/bin/doxygen) 00:01:57.060 Configuring doxy-api-html.conf using configuration 00:01:57.060 Configuring doxy-api-man.conf using configuration 00:01:57.060 Program mandb found: YES (/usr/bin/mandb) 00:01:57.060 Program sphinx-build found: NO 00:01:57.060 Configuring rte_build_config.h using configuration 00:01:57.060 Message: 00:01:57.060 ================= 00:01:57.060 Applications Enabled 00:01:57.060 ================= 00:01:57.060 00:01:57.060 apps: 00:01:57.060 00:01:57.060 00:01:57.060 Message: 00:01:57.060 ================= 00:01:57.060 Libraries Enabled 00:01:57.060 ================= 00:01:57.060 00:01:57.060 libs: 00:01:57.060 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:57.060 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:57.060 cryptodev, dmadev, power, reorder, security, vhost, 00:01:57.060 00:01:57.060 Message: 00:01:57.060 =============== 00:01:57.060 Drivers Enabled 00:01:57.060 =============== 00:01:57.060 00:01:57.060 common: 00:01:57.060 00:01:57.060 bus: 00:01:57.060 pci, vdev, 00:01:57.060 mempool: 00:01:57.060 ring, 00:01:57.060 dma: 00:01:57.060 00:01:57.060 net: 00:01:57.060 00:01:57.060 crypto: 00:01:57.060 00:01:57.060 compress: 00:01:57.060 00:01:57.060 vdpa: 00:01:57.060 00:01:57.060 00:01:57.060 Message: 00:01:57.060 ================= 00:01:57.060 Content Skipped 00:01:57.060 ================= 00:01:57.060 00:01:57.060 apps: 00:01:57.060 dumpcap: explicitly disabled via build config 00:01:57.060 graph: explicitly disabled via build config 00:01:57.060 pdump: explicitly disabled via build config 00:01:57.060 proc-info: explicitly disabled via build 
config 00:01:57.060 test-acl: explicitly disabled via build config 00:01:57.060 test-bbdev: explicitly disabled via build config 00:01:57.060 test-cmdline: explicitly disabled via build config 00:01:57.060 test-compress-perf: explicitly disabled via build config 00:01:57.060 test-crypto-perf: explicitly disabled via build config 00:01:57.060 test-dma-perf: explicitly disabled via build config 00:01:57.060 test-eventdev: explicitly disabled via build config 00:01:57.060 test-fib: explicitly disabled via build config 00:01:57.060 test-flow-perf: explicitly disabled via build config 00:01:57.060 test-gpudev: explicitly disabled via build config 00:01:57.060 test-mldev: explicitly disabled via build config 00:01:57.060 test-pipeline: explicitly disabled via build config 00:01:57.060 test-pmd: explicitly disabled via build config 00:01:57.060 test-regex: explicitly disabled via build config 00:01:57.060 test-sad: explicitly disabled via build config 00:01:57.060 test-security-perf: explicitly disabled via build config 00:01:57.060 00:01:57.060 libs: 00:01:57.060 argparse: explicitly disabled via build config 00:01:57.060 metrics: explicitly disabled via build config 00:01:57.060 acl: explicitly disabled via build config 00:01:57.060 bbdev: explicitly disabled via build config 00:01:57.060 bitratestats: explicitly disabled via build config 00:01:57.060 bpf: explicitly disabled via build config 00:01:57.060 cfgfile: explicitly disabled via build config 00:01:57.060 distributor: explicitly disabled via build config 00:01:57.060 efd: explicitly disabled via build config 00:01:57.060 eventdev: explicitly disabled via build config 00:01:57.060 dispatcher: explicitly disabled via build config 00:01:57.060 gpudev: explicitly disabled via build config 00:01:57.060 gro: explicitly disabled via build config 00:01:57.060 gso: explicitly disabled via build config 00:01:57.060 ip_frag: explicitly disabled via build config 00:01:57.060 jobstats: explicitly disabled via build config 
00:01:57.060 latencystats: explicitly disabled via build config 00:01:57.060 lpm: explicitly disabled via build config 00:01:57.060 member: explicitly disabled via build config 00:01:57.060 pcapng: explicitly disabled via build config 00:01:57.060 rawdev: explicitly disabled via build config 00:01:57.060 regexdev: explicitly disabled via build config 00:01:57.060 mldev: explicitly disabled via build config 00:01:57.060 rib: explicitly disabled via build config 00:01:57.060 sched: explicitly disabled via build config 00:01:57.060 stack: explicitly disabled via build config 00:01:57.060 ipsec: explicitly disabled via build config 00:01:57.060 pdcp: explicitly disabled via build config 00:01:57.060 fib: explicitly disabled via build config 00:01:57.060 port: explicitly disabled via build config 00:01:57.060 pdump: explicitly disabled via build config 00:01:57.060 table: explicitly disabled via build config 00:01:57.060 pipeline: explicitly disabled via build config 00:01:57.060 graph: explicitly disabled via build config 00:01:57.060 node: explicitly disabled via build config 00:01:57.060 00:01:57.060 drivers: 00:01:57.060 common/cpt: not in enabled drivers build config 00:01:57.060 common/dpaax: not in enabled drivers build config 00:01:57.060 common/iavf: not in enabled drivers build config 00:01:57.060 common/idpf: not in enabled drivers build config 00:01:57.060 common/ionic: not in enabled drivers build config 00:01:57.060 common/mvep: not in enabled drivers build config 00:01:57.060 common/octeontx: not in enabled drivers build config 00:01:57.060 bus/auxiliary: not in enabled drivers build config 00:01:57.060 bus/cdx: not in enabled drivers build config 00:01:57.060 bus/dpaa: not in enabled drivers build config 00:01:57.060 bus/fslmc: not in enabled drivers build config 00:01:57.060 bus/ifpga: not in enabled drivers build config 00:01:57.060 bus/platform: not in enabled drivers build config 00:01:57.060 bus/uacce: not in enabled drivers build config 
00:01:57.060 bus/vmbus: not in enabled drivers build config 00:01:57.060 common/cnxk: not in enabled drivers build config 00:01:57.060 common/mlx5: not in enabled drivers build config 00:01:57.060 common/nfp: not in enabled drivers build config 00:01:57.060 common/nitrox: not in enabled drivers build config 00:01:57.060 common/qat: not in enabled drivers build config 00:01:57.060 common/sfc_efx: not in enabled drivers build config 00:01:57.060 mempool/bucket: not in enabled drivers build config 00:01:57.060 mempool/cnxk: not in enabled drivers build config 00:01:57.060 mempool/dpaa: not in enabled drivers build config 00:01:57.060 mempool/dpaa2: not in enabled drivers build config 00:01:57.060 mempool/octeontx: not in enabled drivers build config 00:01:57.060 mempool/stack: not in enabled drivers build config 00:01:57.060 dma/cnxk: not in enabled drivers build config 00:01:57.060 dma/dpaa: not in enabled drivers build config 00:01:57.060 dma/dpaa2: not in enabled drivers build config 00:01:57.060 dma/hisilicon: not in enabled drivers build config 00:01:57.060 dma/idxd: not in enabled drivers build config 00:01:57.060 dma/ioat: not in enabled drivers build config 00:01:57.060 dma/skeleton: not in enabled drivers build config 00:01:57.060 net/af_packet: not in enabled drivers build config 00:01:57.060 net/af_xdp: not in enabled drivers build config 00:01:57.060 net/ark: not in enabled drivers build config 00:01:57.060 net/atlantic: not in enabled drivers build config 00:01:57.060 net/avp: not in enabled drivers build config 00:01:57.060 net/axgbe: not in enabled drivers build config 00:01:57.060 net/bnx2x: not in enabled drivers build config 00:01:57.060 net/bnxt: not in enabled drivers build config 00:01:57.060 net/bonding: not in enabled drivers build config 00:01:57.060 net/cnxk: not in enabled drivers build config 00:01:57.060 net/cpfl: not in enabled drivers build config 00:01:57.060 net/cxgbe: not in enabled drivers build config 00:01:57.060 net/dpaa: not in 
enabled drivers build config 00:01:57.060 net/dpaa2: not in enabled drivers build config 00:01:57.060 net/e1000: not in enabled drivers build config 00:01:57.060 net/ena: not in enabled drivers build config 00:01:57.060 net/enetc: not in enabled drivers build config 00:01:57.060 net/enetfec: not in enabled drivers build config 00:01:57.060 net/enic: not in enabled drivers build config 00:01:57.060 net/failsafe: not in enabled drivers build config 00:01:57.060 net/fm10k: not in enabled drivers build config 00:01:57.060 net/gve: not in enabled drivers build config 00:01:57.060 net/hinic: not in enabled drivers build config 00:01:57.060 net/hns3: not in enabled drivers build config 00:01:57.061 net/i40e: not in enabled drivers build config 00:01:57.061 net/iavf: not in enabled drivers build config 00:01:57.061 net/ice: not in enabled drivers build config 00:01:57.061 net/idpf: not in enabled drivers build config 00:01:57.061 net/igc: not in enabled drivers build config 00:01:57.061 net/ionic: not in enabled drivers build config 00:01:57.061 net/ipn3ke: not in enabled drivers build config 00:01:57.061 net/ixgbe: not in enabled drivers build config 00:01:57.061 net/mana: not in enabled drivers build config 00:01:57.061 net/memif: not in enabled drivers build config 00:01:57.061 net/mlx4: not in enabled drivers build config 00:01:57.061 net/mlx5: not in enabled drivers build config 00:01:57.061 net/mvneta: not in enabled drivers build config 00:01:57.061 net/mvpp2: not in enabled drivers build config 00:01:57.061 net/netvsc: not in enabled drivers build config 00:01:57.061 net/nfb: not in enabled drivers build config 00:01:57.061 net/nfp: not in enabled drivers build config 00:01:57.061 net/ngbe: not in enabled drivers build config 00:01:57.061 net/null: not in enabled drivers build config 00:01:57.061 net/octeontx: not in enabled drivers build config 00:01:57.061 net/octeon_ep: not in enabled drivers build config 00:01:57.061 net/pcap: not in enabled drivers build 
config 00:01:57.061 net/pfe: not in enabled drivers build config 00:01:57.061 net/qede: not in enabled drivers build config 00:01:57.061 net/ring: not in enabled drivers build config 00:01:57.061 net/sfc: not in enabled drivers build config 00:01:57.061 net/softnic: not in enabled drivers build config 00:01:57.061 net/tap: not in enabled drivers build config 00:01:57.061 net/thunderx: not in enabled drivers build config 00:01:57.061 net/txgbe: not in enabled drivers build config 00:01:57.061 net/vdev_netvsc: not in enabled drivers build config 00:01:57.061 net/vhost: not in enabled drivers build config 00:01:57.061 net/virtio: not in enabled drivers build config 00:01:57.061 net/vmxnet3: not in enabled drivers build config 00:01:57.061 raw/*: missing internal dependency, "rawdev" 00:01:57.061 crypto/armv8: not in enabled drivers build config 00:01:57.061 crypto/bcmfs: not in enabled drivers build config 00:01:57.061 crypto/caam_jr: not in enabled drivers build config 00:01:57.061 crypto/ccp: not in enabled drivers build config 00:01:57.061 crypto/cnxk: not in enabled drivers build config 00:01:57.061 crypto/dpaa_sec: not in enabled drivers build config 00:01:57.061 crypto/dpaa2_sec: not in enabled drivers build config 00:01:57.061 crypto/ipsec_mb: not in enabled drivers build config 00:01:57.061 crypto/mlx5: not in enabled drivers build config 00:01:57.061 crypto/mvsam: not in enabled drivers build config 00:01:57.061 crypto/nitrox: not in enabled drivers build config 00:01:57.061 crypto/null: not in enabled drivers build config 00:01:57.061 crypto/octeontx: not in enabled drivers build config 00:01:57.061 crypto/openssl: not in enabled drivers build config 00:01:57.061 crypto/scheduler: not in enabled drivers build config 00:01:57.061 crypto/uadk: not in enabled drivers build config 00:01:57.061 crypto/virtio: not in enabled drivers build config 00:01:57.061 compress/isal: not in enabled drivers build config 00:01:57.061 compress/mlx5: not in enabled drivers build 
config 00:01:57.061 compress/nitrox: not in enabled drivers build config 00:01:57.061 compress/octeontx: not in enabled drivers build config 00:01:57.061 compress/zlib: not in enabled drivers build config 00:01:57.061 regex/*: missing internal dependency, "regexdev" 00:01:57.061 ml/*: missing internal dependency, "mldev" 00:01:57.061 vdpa/ifc: not in enabled drivers build config 00:01:57.061 vdpa/mlx5: not in enabled drivers build config 00:01:57.061 vdpa/nfp: not in enabled drivers build config 00:01:57.061 vdpa/sfc: not in enabled drivers build config 00:01:57.061 event/*: missing internal dependency, "eventdev" 00:01:57.061 baseband/*: missing internal dependency, "bbdev" 00:01:57.061 gpu/*: missing internal dependency, "gpudev" 00:01:57.061 00:01:57.061 00:01:57.061 Build targets in project: 85 00:01:57.061 00:01:57.061 DPDK 24.03.0 00:01:57.061 00:01:57.061 User defined options 00:01:57.061 buildtype : debug 00:01:57.061 default_library : shared 00:01:57.061 libdir : lib 00:01:57.061 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:57.061 b_sanitize : address 00:01:57.061 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:57.061 c_link_args : 00:01:57.061 cpu_instruction_set: native 00:01:57.061 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:57.061 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:57.061 enable_docs : false 00:01:57.061 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:57.061 enable_kmods : false 00:01:57.061 max_lcores : 128 00:01:57.061 tests : false 
00:01:57.061 00:01:57.061 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:57.061 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:57.061 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:57.061 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:57.061 [3/268] Linking static target lib/librte_kvargs.a 00:01:57.061 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:57.061 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:57.061 [6/268] Linking static target lib/librte_log.a 00:01:57.061 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.061 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:57.320 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:57.320 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:57.320 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:57.320 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:57.320 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:57.579 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:57.579 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:57.579 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:57.579 [17/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.579 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:57.579 [19/268] Linking static target lib/librte_telemetry.a 00:01:57.837 [20/268] Linking target lib/librte_log.so.24.1 00:01:57.837 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 
00:01:58.096 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:58.096 [23/268] Linking target lib/librte_kvargs.so.24.1 00:01:58.096 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:58.096 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:58.096 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:58.354 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:58.354 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:58.354 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:58.354 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:58.354 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:58.354 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:58.612 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.612 [34/268] Linking target lib/librte_telemetry.so.24.1 00:01:58.871 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:58.871 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:58.871 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.129 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:59.129 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.129 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.129 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:59.129 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.129 [43/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:59.129 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.129 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:59.388 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:59.388 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:59.646 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:59.646 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.646 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:59.905 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:59.905 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:59.905 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:59.905 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.163 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.163 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.163 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:00.163 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.421 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.421 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.421 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:00.679 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:00.679 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:00.679 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:00.937 [65/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.937 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.195 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:01.195 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:01.453 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:01.453 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:01.453 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:01.453 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.453 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:01.453 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.453 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.712 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:01.712 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.971 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:01.971 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:01.971 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:01.971 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.230 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:02.230 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:02.488 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.488 [85/268] Linking static target lib/librte_ring.a 00:02:02.488 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.488 [87/268] Linking static target lib/librte_eal.a 00:02:02.747 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:02.747 [89/268] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:02.747 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:02.747 [91/268] Linking static target lib/librte_rcu.a 00:02:03.006 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:03.006 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.006 [94/268] Linking static target lib/librte_mempool.a 00:02:03.006 [95/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.006 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:03.006 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:03.265 [98/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.524 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:03.524 [100/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:03.524 [101/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:04.091 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:04.091 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:04.091 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:04.091 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:04.091 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:04.091 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:04.091 [108/268] Linking static target lib/librte_mbuf.a 00:02:04.091 [109/268] Linking static target lib/librte_net.a 00:02:04.350 [110/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.608 [111/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:04.608 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 
00:02:04.608 [113/268] Linking static target lib/librte_meter.a 00:02:04.608 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:04.608 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:04.608 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.866 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:04.866 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.124 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:05.124 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.408 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:05.408 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:05.408 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:05.679 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:05.679 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:05.679 [126/268] Linking static target lib/librte_pci.a 00:02:05.937 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:05.937 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:05.937 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:05.937 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:05.937 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:05.937 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:06.195 [133/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.195 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:06.195 [135/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:06.195 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:06.195 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:06.195 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:06.195 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:06.195 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:06.195 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:06.195 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:06.195 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:06.453 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:06.453 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:06.712 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:06.712 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:06.712 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:06.971 [149/268] Linking static target lib/librte_cmdline.a 00:02:06.971 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:07.229 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:07.229 [152/268] Linking static target lib/librte_timer.a 00:02:07.229 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:07.229 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:07.229 [155/268] Linking static target lib/librte_ethdev.a 00:02:07.489 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:07.489 [157/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:07.489 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:07.747 [159/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.747 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:07.747 [161/268] Linking static target lib/librte_compressdev.a 00:02:07.747 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:07.747 [163/268] Linking static target lib/librte_hash.a 00:02:07.747 [164/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:08.005 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:08.264 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:08.264 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:08.264 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:08.264 [169/268] Linking static target lib/librte_dmadev.a 00:02:08.522 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.522 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:08.522 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:08.522 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:08.522 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.781 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:09.040 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.040 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:09.040 [178/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:09.040 [179/268] Linking static target lib/librte_cryptodev.a 00:02:09.040 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:09.040 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.298 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:09.298 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.298 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:09.556 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:09.556 [186/268] Linking static target lib/librte_power.a 00:02:09.814 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:09.814 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:09.814 [189/268] Linking static target lib/librte_reorder.a 00:02:09.814 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:09.814 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:10.072 [192/268] Linking static target lib/librte_security.a 00:02:10.072 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:10.330 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:10.330 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.588 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.588 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.588 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:10.846 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.846 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:10.846 [201/268] 
Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:11.104 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.104 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.104 [204/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.104 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.362 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.363 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.620 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.620 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.620 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.620 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.878 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.878 [213/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.878 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.878 [215/268] Linking static target drivers/librte_bus_pci.a 00:02:11.878 [216/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:11.878 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.878 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.878 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:11.878 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:11.878 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.136 [222/268] Generating 
drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.136 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.136 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.136 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.136 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:12.394 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.961 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:12.961 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.961 [230/268] Linking target lib/librte_eal.so.24.1 00:02:13.219 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:13.219 [232/268] Linking target lib/librte_ring.so.24.1 00:02:13.219 [233/268] Linking target lib/librte_meter.so.24.1 00:02:13.219 [234/268] Linking target lib/librte_pci.so.24.1 00:02:13.219 [235/268] Linking target lib/librte_dmadev.so.24.1 00:02:13.219 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:13.219 [237/268] Linking target lib/librte_timer.so.24.1 00:02:13.479 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:13.479 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:13.479 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:13.479 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:13.479 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:13.479 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:13.479 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:13.479 
[245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:13.479 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:13.479 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:13.738 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:13.738 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:13.738 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:13.738 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:13.738 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:13.738 [253/268] Linking target lib/librte_net.so.24.1 00:02:13.738 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:13.996 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:13.996 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:13.996 [257/268] Linking target lib/librte_hash.so.24.1 00:02:13.996 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:13.996 [259/268] Linking target lib/librte_security.so.24.1 00:02:14.255 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:14.514 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.772 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:15.031 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.031 [264/268] Linking target lib/librte_power.so.24.1 00:02:15.967 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.967 [266/268] Linking static target lib/librte_vhost.a 00:02:17.871 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.129 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:18.129 INFO: autodetecting backend as ninja 00:02:18.129 INFO: 
calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:19.506 CC lib/ut/ut.o 00:02:19.506 CC lib/log/log.o 00:02:19.506 CC lib/log/log_flags.o 00:02:19.506 CC lib/log/log_deprecated.o 00:02:19.506 CC lib/ut_mock/mock.o 00:02:19.506 LIB libspdk_log.a 00:02:19.506 LIB libspdk_ut.a 00:02:19.506 LIB libspdk_ut_mock.a 00:02:19.506 SO libspdk_ut.so.2.0 00:02:19.506 SO libspdk_log.so.7.0 00:02:19.506 SO libspdk_ut_mock.so.6.0 00:02:19.506 SYMLINK libspdk_ut.so 00:02:19.506 SYMLINK libspdk_ut_mock.so 00:02:19.506 SYMLINK libspdk_log.so 00:02:19.766 CC lib/dma/dma.o 00:02:19.766 CXX lib/trace_parser/trace.o 00:02:19.766 CC lib/ioat/ioat.o 00:02:19.766 CC lib/util/base64.o 00:02:19.766 CC lib/util/bit_array.o 00:02:19.766 CC lib/util/crc16.o 00:02:19.766 CC lib/util/cpuset.o 00:02:19.766 CC lib/util/crc32.o 00:02:19.766 CC lib/util/crc32c.o 00:02:20.025 CC lib/vfio_user/host/vfio_user_pci.o 00:02:20.025 CC lib/vfio_user/host/vfio_user.o 00:02:20.025 CC lib/util/crc32_ieee.o 00:02:20.025 CC lib/util/crc64.o 00:02:20.025 CC lib/util/dif.o 00:02:20.025 LIB libspdk_dma.a 00:02:20.025 CC lib/util/fd.o 00:02:20.025 SO libspdk_dma.so.4.0 00:02:20.025 CC lib/util/fd_group.o 00:02:20.283 CC lib/util/file.o 00:02:20.283 SYMLINK libspdk_dma.so 00:02:20.283 CC lib/util/hexlify.o 00:02:20.283 CC lib/util/iov.o 00:02:20.283 LIB libspdk_ioat.a 00:02:20.283 CC lib/util/math.o 00:02:20.283 SO libspdk_ioat.so.7.0 00:02:20.283 CC lib/util/net.o 00:02:20.283 LIB libspdk_vfio_user.a 00:02:20.283 CC lib/util/pipe.o 00:02:20.283 SYMLINK libspdk_ioat.so 00:02:20.283 SO libspdk_vfio_user.so.5.0 00:02:20.283 CC lib/util/strerror_tls.o 00:02:20.283 CC lib/util/string.o 00:02:20.283 CC lib/util/uuid.o 00:02:20.283 SYMLINK libspdk_vfio_user.so 00:02:20.283 CC lib/util/xor.o 00:02:20.283 CC lib/util/zipf.o 00:02:20.851 LIB libspdk_util.a 00:02:20.851 SO libspdk_util.so.10.0 00:02:20.851 LIB libspdk_trace_parser.a 00:02:20.851 SO 
libspdk_trace_parser.so.5.0 00:02:21.110 SYMLINK libspdk_util.so 00:02:21.110 SYMLINK libspdk_trace_parser.so 00:02:21.110 CC lib/idxd/idxd.o 00:02:21.110 CC lib/rdma_provider/common.o 00:02:21.110 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:21.110 CC lib/env_dpdk/env.o 00:02:21.110 CC lib/idxd/idxd_user.o 00:02:21.110 CC lib/idxd/idxd_kernel.o 00:02:21.110 CC lib/json/json_parse.o 00:02:21.110 CC lib/conf/conf.o 00:02:21.110 CC lib/vmd/vmd.o 00:02:21.110 CC lib/rdma_utils/rdma_utils.o 00:02:21.369 CC lib/env_dpdk/memory.o 00:02:21.369 CC lib/env_dpdk/pci.o 00:02:21.369 LIB libspdk_rdma_provider.a 00:02:21.369 SO libspdk_rdma_provider.so.6.0 00:02:21.369 CC lib/json/json_util.o 00:02:21.369 LIB libspdk_conf.a 00:02:21.369 CC lib/json/json_write.o 00:02:21.369 LIB libspdk_rdma_utils.a 00:02:21.369 SO libspdk_conf.so.6.0 00:02:21.671 SO libspdk_rdma_utils.so.1.0 00:02:21.671 SYMLINK libspdk_rdma_provider.so 00:02:21.671 CC lib/env_dpdk/init.o 00:02:21.671 SYMLINK libspdk_conf.so 00:02:21.671 CC lib/env_dpdk/threads.o 00:02:21.671 SYMLINK libspdk_rdma_utils.so 00:02:21.671 CC lib/env_dpdk/pci_ioat.o 00:02:21.671 CC lib/env_dpdk/pci_virtio.o 00:02:21.672 CC lib/env_dpdk/pci_vmd.o 00:02:21.672 CC lib/env_dpdk/pci_idxd.o 00:02:21.672 LIB libspdk_json.a 00:02:21.672 SO libspdk_json.so.6.0 00:02:21.931 CC lib/env_dpdk/pci_event.o 00:02:21.931 CC lib/env_dpdk/sigbus_handler.o 00:02:21.931 CC lib/env_dpdk/pci_dpdk.o 00:02:21.931 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:21.931 SYMLINK libspdk_json.so 00:02:21.931 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:21.931 LIB libspdk_idxd.a 00:02:21.931 SO libspdk_idxd.so.12.0 00:02:21.931 CC lib/vmd/led.o 00:02:21.931 SYMLINK libspdk_idxd.so 00:02:22.190 CC lib/jsonrpc/jsonrpc_server.o 00:02:22.190 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:22.190 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:22.190 CC lib/jsonrpc/jsonrpc_client.o 00:02:22.190 LIB libspdk_vmd.a 00:02:22.190 SO libspdk_vmd.so.6.0 00:02:22.190 SYMLINK libspdk_vmd.so 
00:02:22.448 LIB libspdk_jsonrpc.a 00:02:22.448 SO libspdk_jsonrpc.so.6.0 00:02:22.448 SYMLINK libspdk_jsonrpc.so 00:02:22.707 CC lib/rpc/rpc.o 00:02:22.966 LIB libspdk_env_dpdk.a 00:02:22.966 LIB libspdk_rpc.a 00:02:22.966 SO libspdk_env_dpdk.so.15.0 00:02:22.966 SO libspdk_rpc.so.6.0 00:02:22.966 SYMLINK libspdk_rpc.so 00:02:23.225 SYMLINK libspdk_env_dpdk.so 00:02:23.225 CC lib/notify/notify_rpc.o 00:02:23.225 CC lib/notify/notify.o 00:02:23.225 CC lib/trace/trace.o 00:02:23.225 CC lib/trace/trace_flags.o 00:02:23.225 CC lib/keyring/keyring_rpc.o 00:02:23.225 CC lib/keyring/keyring.o 00:02:23.225 CC lib/trace/trace_rpc.o 00:02:23.484 LIB libspdk_notify.a 00:02:23.484 SO libspdk_notify.so.6.0 00:02:23.484 LIB libspdk_keyring.a 00:02:23.484 LIB libspdk_trace.a 00:02:23.484 SO libspdk_keyring.so.1.0 00:02:23.484 SYMLINK libspdk_notify.so 00:02:23.484 SO libspdk_trace.so.10.0 00:02:23.743 SYMLINK libspdk_keyring.so 00:02:23.743 SYMLINK libspdk_trace.so 00:02:24.002 CC lib/thread/iobuf.o 00:02:24.002 CC lib/thread/thread.o 00:02:24.002 CC lib/sock/sock.o 00:02:24.002 CC lib/sock/sock_rpc.o 00:02:24.570 LIB libspdk_sock.a 00:02:24.570 SO libspdk_sock.so.10.0 00:02:24.570 SYMLINK libspdk_sock.so 00:02:24.828 CC lib/nvme/nvme_ctrlr.o 00:02:24.828 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:24.828 CC lib/nvme/nvme_fabric.o 00:02:24.828 CC lib/nvme/nvme_ns_cmd.o 00:02:24.828 CC lib/nvme/nvme_ns.o 00:02:24.828 CC lib/nvme/nvme_pcie.o 00:02:24.828 CC lib/nvme/nvme_pcie_common.o 00:02:24.828 CC lib/nvme/nvme_qpair.o 00:02:24.828 CC lib/nvme/nvme.o 00:02:25.764 LIB libspdk_thread.a 00:02:25.764 SO libspdk_thread.so.10.1 00:02:25.764 CC lib/nvme/nvme_quirks.o 00:02:25.764 CC lib/nvme/nvme_transport.o 00:02:25.764 SYMLINK libspdk_thread.so 00:02:25.764 CC lib/nvme/nvme_discovery.o 00:02:25.764 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.023 CC lib/accel/accel.o 00:02:26.023 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.023 CC lib/nvme/nvme_tcp.o 00:02:26.023 CC lib/nvme/nvme_opal.o 
00:02:26.282 CC lib/nvme/nvme_io_msg.o 00:02:26.541 CC lib/blob/blobstore.o 00:02:26.541 CC lib/init/json_config.o 00:02:26.541 CC lib/virtio/virtio.o 00:02:26.541 CC lib/nvme/nvme_poll_group.o 00:02:26.541 CC lib/accel/accel_rpc.o 00:02:26.541 CC lib/accel/accel_sw.o 00:02:26.800 CC lib/init/subsystem.o 00:02:26.800 CC lib/nvme/nvme_zns.o 00:02:26.800 CC lib/nvme/nvme_stubs.o 00:02:26.800 CC lib/virtio/virtio_vhost_user.o 00:02:26.800 CC lib/nvme/nvme_auth.o 00:02:26.800 CC lib/init/subsystem_rpc.o 00:02:27.059 LIB libspdk_accel.a 00:02:27.059 SO libspdk_accel.so.16.0 00:02:27.059 CC lib/init/rpc.o 00:02:27.059 SYMLINK libspdk_accel.so 00:02:27.059 CC lib/nvme/nvme_cuse.o 00:02:27.059 CC lib/virtio/virtio_vfio_user.o 00:02:27.318 LIB libspdk_init.a 00:02:27.318 SO libspdk_init.so.5.0 00:02:27.318 CC lib/nvme/nvme_rdma.o 00:02:27.318 CC lib/bdev/bdev.o 00:02:27.318 CC lib/bdev/bdev_rpc.o 00:02:27.318 SYMLINK libspdk_init.so 00:02:27.318 CC lib/blob/request.o 00:02:27.576 CC lib/virtio/virtio_pci.o 00:02:27.835 CC lib/event/app.o 00:02:27.835 CC lib/event/reactor.o 00:02:27.835 CC lib/blob/zeroes.o 00:02:27.835 CC lib/bdev/bdev_zone.o 00:02:27.835 LIB libspdk_virtio.a 00:02:27.835 SO libspdk_virtio.so.7.0 00:02:27.835 SYMLINK libspdk_virtio.so 00:02:27.835 CC lib/blob/blob_bs_dev.o 00:02:27.835 CC lib/bdev/part.o 00:02:27.835 CC lib/bdev/scsi_nvme.o 00:02:27.835 CC lib/event/log_rpc.o 00:02:28.094 CC lib/event/app_rpc.o 00:02:28.094 CC lib/event/scheduler_static.o 00:02:28.352 LIB libspdk_event.a 00:02:28.352 SO libspdk_event.so.14.0 00:02:28.352 SYMLINK libspdk_event.so 00:02:28.919 LIB libspdk_nvme.a 00:02:28.919 SO libspdk_nvme.so.13.1 00:02:29.177 SYMLINK libspdk_nvme.so 00:02:30.112 LIB libspdk_blob.a 00:02:30.112 SO libspdk_blob.so.11.0 00:02:30.371 LIB libspdk_bdev.a 00:02:30.371 SYMLINK libspdk_blob.so 00:02:30.371 SO libspdk_bdev.so.16.0 00:02:30.371 SYMLINK libspdk_bdev.so 00:02:30.630 CC lib/blobfs/blobfs.o 00:02:30.630 CC lib/blobfs/tree.o 00:02:30.630 
CC lib/lvol/lvol.o 00:02:30.630 CC lib/ublk/ublk.o 00:02:30.630 CC lib/ublk/ublk_rpc.o 00:02:30.630 CC lib/ftl/ftl_core.o 00:02:30.630 CC lib/ftl/ftl_init.o 00:02:30.630 CC lib/nvmf/ctrlr.o 00:02:30.630 CC lib/scsi/dev.o 00:02:30.630 CC lib/nbd/nbd.o 00:02:30.630 CC lib/ftl/ftl_layout.o 00:02:30.889 CC lib/ftl/ftl_debug.o 00:02:30.889 CC lib/ftl/ftl_io.o 00:02:30.889 CC lib/scsi/lun.o 00:02:31.148 CC lib/ftl/ftl_sb.o 00:02:31.148 CC lib/ftl/ftl_l2p.o 00:02:31.148 CC lib/ftl/ftl_l2p_flat.o 00:02:31.148 CC lib/nbd/nbd_rpc.o 00:02:31.148 CC lib/ftl/ftl_nv_cache.o 00:02:31.148 CC lib/scsi/port.o 00:02:31.407 CC lib/ftl/ftl_band.o 00:02:31.407 CC lib/nvmf/ctrlr_discovery.o 00:02:31.407 CC lib/ftl/ftl_band_ops.o 00:02:31.407 LIB libspdk_ublk.a 00:02:31.407 LIB libspdk_nbd.a 00:02:31.407 SO libspdk_ublk.so.3.0 00:02:31.407 SO libspdk_nbd.so.7.0 00:02:31.407 CC lib/scsi/scsi.o 00:02:31.407 SYMLINK libspdk_ublk.so 00:02:31.407 SYMLINK libspdk_nbd.so 00:02:31.407 CC lib/ftl/ftl_writer.o 00:02:31.407 CC lib/ftl/ftl_rq.o 00:02:31.666 CC lib/scsi/scsi_bdev.o 00:02:31.666 LIB libspdk_blobfs.a 00:02:31.666 SO libspdk_blobfs.so.10.0 00:02:31.666 CC lib/nvmf/ctrlr_bdev.o 00:02:31.666 CC lib/nvmf/subsystem.o 00:02:31.666 SYMLINK libspdk_blobfs.so 00:02:31.666 CC lib/nvmf/nvmf.o 00:02:31.666 CC lib/ftl/ftl_reloc.o 00:02:31.666 CC lib/ftl/ftl_l2p_cache.o 00:02:31.666 LIB libspdk_lvol.a 00:02:31.666 SO libspdk_lvol.so.10.0 00:02:31.925 SYMLINK libspdk_lvol.so 00:02:31.925 CC lib/ftl/ftl_p2l.o 00:02:31.925 CC lib/scsi/scsi_pr.o 00:02:32.184 CC lib/ftl/mngt/ftl_mngt.o 00:02:32.184 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:32.184 CC lib/scsi/scsi_rpc.o 00:02:32.184 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:32.184 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:32.184 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:32.184 CC lib/scsi/task.o 00:02:32.443 CC lib/nvmf/nvmf_rpc.o 00:02:32.443 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:32.443 CC lib/nvmf/transport.o 00:02:32.443 CC lib/nvmf/tcp.o 00:02:32.443 CC 
lib/nvmf/stubs.o 00:02:32.443 LIB libspdk_scsi.a 00:02:32.702 SO libspdk_scsi.so.9.0 00:02:32.702 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:32.702 CC lib/nvmf/mdns_server.o 00:02:32.702 CC lib/nvmf/rdma.o 00:02:32.702 SYMLINK libspdk_scsi.so 00:02:32.702 CC lib/nvmf/auth.o 00:02:33.020 CC lib/iscsi/conn.o 00:02:33.020 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:33.020 CC lib/iscsi/init_grp.o 00:02:33.020 CC lib/iscsi/iscsi.o 00:02:33.020 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:33.020 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:33.020 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:33.278 CC lib/iscsi/md5.o 00:02:33.278 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:33.278 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:33.278 CC lib/iscsi/param.o 00:02:33.278 CC lib/iscsi/portal_grp.o 00:02:33.278 CC lib/iscsi/tgt_node.o 00:02:33.536 CC lib/ftl/utils/ftl_conf.o 00:02:33.536 CC lib/iscsi/iscsi_subsystem.o 00:02:33.536 CC lib/iscsi/iscsi_rpc.o 00:02:33.536 CC lib/iscsi/task.o 00:02:33.795 CC lib/ftl/utils/ftl_md.o 00:02:33.795 CC lib/vhost/vhost.o 00:02:33.795 CC lib/ftl/utils/ftl_mempool.o 00:02:33.795 CC lib/ftl/utils/ftl_bitmap.o 00:02:33.795 CC lib/ftl/utils/ftl_property.o 00:02:34.053 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:34.053 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:34.053 CC lib/vhost/vhost_rpc.o 00:02:34.053 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:34.053 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:34.053 CC lib/vhost/vhost_scsi.o 00:02:34.053 CC lib/vhost/vhost_blk.o 00:02:34.312 CC lib/vhost/rte_vhost_user.o 00:02:34.312 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:34.312 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.312 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:34.312 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.571 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.571 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.571 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.571 CC lib/ftl/base/ftl_base_dev.o 00:02:34.571 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.571 LIB libspdk_iscsi.a 00:02:34.571 CC 
lib/ftl/ftl_trace.o 00:02:34.830 SO libspdk_iscsi.so.8.0 00:02:34.830 LIB libspdk_ftl.a 00:02:34.830 SYMLINK libspdk_iscsi.so 00:02:35.088 LIB libspdk_nvmf.a 00:02:35.088 SO libspdk_ftl.so.9.0 00:02:35.347 SO libspdk_nvmf.so.19.0 00:02:35.347 LIB libspdk_vhost.a 00:02:35.347 SO libspdk_vhost.so.8.0 00:02:35.606 SYMLINK libspdk_nvmf.so 00:02:35.606 SYMLINK libspdk_vhost.so 00:02:35.606 SYMLINK libspdk_ftl.so 00:02:35.864 CC module/env_dpdk/env_dpdk_rpc.o 00:02:35.864 CC module/sock/posix/posix.o 00:02:35.864 CC module/keyring/file/keyring.o 00:02:35.864 CC module/accel/error/accel_error.o 00:02:35.864 CC module/blob/bdev/blob_bdev.o 00:02:35.864 CC module/accel/iaa/accel_iaa.o 00:02:35.864 CC module/accel/ioat/accel_ioat.o 00:02:35.864 CC module/accel/dsa/accel_dsa.o 00:02:35.864 CC module/keyring/linux/keyring.o 00:02:35.864 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:36.123 LIB libspdk_env_dpdk_rpc.a 00:02:36.124 SO libspdk_env_dpdk_rpc.so.6.0 00:02:36.124 CC module/keyring/linux/keyring_rpc.o 00:02:36.124 CC module/keyring/file/keyring_rpc.o 00:02:36.124 SYMLINK libspdk_env_dpdk_rpc.so 00:02:36.124 CC module/accel/iaa/accel_iaa_rpc.o 00:02:36.124 CC module/accel/error/accel_error_rpc.o 00:02:36.124 CC module/accel/ioat/accel_ioat_rpc.o 00:02:36.124 LIB libspdk_scheduler_dynamic.a 00:02:36.124 SO libspdk_scheduler_dynamic.so.4.0 00:02:36.124 LIB libspdk_keyring_linux.a 00:02:36.124 LIB libspdk_blob_bdev.a 00:02:36.124 CC module/accel/dsa/accel_dsa_rpc.o 00:02:36.124 LIB libspdk_accel_iaa.a 00:02:36.124 LIB libspdk_keyring_file.a 00:02:36.124 SO libspdk_keyring_linux.so.1.0 00:02:36.124 SO libspdk_blob_bdev.so.11.0 00:02:36.124 LIB libspdk_accel_error.a 00:02:36.382 SYMLINK libspdk_scheduler_dynamic.so 00:02:36.382 SO libspdk_keyring_file.so.1.0 00:02:36.382 LIB libspdk_accel_ioat.a 00:02:36.382 SO libspdk_accel_iaa.so.3.0 00:02:36.382 SO libspdk_accel_error.so.2.0 00:02:36.382 SO libspdk_accel_ioat.so.6.0 00:02:36.382 SYMLINK libspdk_keyring_linux.so 
00:02:36.382 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:36.382 SYMLINK libspdk_blob_bdev.so 00:02:36.382 SYMLINK libspdk_keyring_file.so 00:02:36.382 SYMLINK libspdk_accel_iaa.so 00:02:36.382 SYMLINK libspdk_accel_error.so 00:02:36.382 SYMLINK libspdk_accel_ioat.so 00:02:36.382 LIB libspdk_accel_dsa.a 00:02:36.382 SO libspdk_accel_dsa.so.5.0 00:02:36.382 CC module/scheduler/gscheduler/gscheduler.o 00:02:36.382 SYMLINK libspdk_accel_dsa.so 00:02:36.382 LIB libspdk_scheduler_dpdk_governor.a 00:02:36.641 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:36.641 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:36.641 LIB libspdk_scheduler_gscheduler.a 00:02:36.641 CC module/bdev/gpt/gpt.o 00:02:36.641 CC module/bdev/lvol/vbdev_lvol.o 00:02:36.641 CC module/bdev/malloc/bdev_malloc.o 00:02:36.641 CC module/bdev/error/vbdev_error.o 00:02:36.641 CC module/bdev/delay/vbdev_delay.o 00:02:36.641 CC module/blobfs/bdev/blobfs_bdev.o 00:02:36.641 SO libspdk_scheduler_gscheduler.so.4.0 00:02:36.641 CC module/bdev/null/bdev_null.o 00:02:36.641 SYMLINK libspdk_scheduler_gscheduler.so 00:02:36.641 CC module/bdev/null/bdev_null_rpc.o 00:02:36.641 LIB libspdk_sock_posix.a 00:02:36.900 CC module/bdev/nvme/bdev_nvme.o 00:02:36.900 SO libspdk_sock_posix.so.6.0 00:02:36.900 CC module/bdev/gpt/vbdev_gpt.o 00:02:36.900 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:36.900 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:36.900 SYMLINK libspdk_sock_posix.so 00:02:36.900 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:36.900 CC module/bdev/error/vbdev_error_rpc.o 00:02:36.900 LIB libspdk_bdev_null.a 00:02:36.900 SO libspdk_bdev_null.so.6.0 00:02:36.900 LIB libspdk_blobfs_bdev.a 00:02:36.900 SO libspdk_blobfs_bdev.so.6.0 00:02:36.900 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:37.159 SYMLINK libspdk_bdev_null.so 00:02:37.159 LIB libspdk_bdev_error.a 00:02:37.159 SYMLINK libspdk_blobfs_bdev.so 00:02:37.159 SO libspdk_bdev_error.so.6.0 00:02:37.159 LIB libspdk_bdev_malloc.a 00:02:37.159 LIB 
libspdk_bdev_gpt.a 00:02:37.159 SO libspdk_bdev_malloc.so.6.0 00:02:37.159 SO libspdk_bdev_gpt.so.6.0 00:02:37.159 SYMLINK libspdk_bdev_error.so 00:02:37.159 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:37.159 LIB libspdk_bdev_delay.a 00:02:37.159 CC module/bdev/passthru/vbdev_passthru.o 00:02:37.159 SYMLINK libspdk_bdev_malloc.so 00:02:37.159 SYMLINK libspdk_bdev_gpt.so 00:02:37.159 SO libspdk_bdev_delay.so.6.0 00:02:37.159 CC module/bdev/raid/bdev_raid.o 00:02:37.159 CC module/bdev/split/vbdev_split.o 00:02:37.159 LIB libspdk_bdev_lvol.a 00:02:37.418 SYMLINK libspdk_bdev_delay.so 00:02:37.418 SO libspdk_bdev_lvol.so.6.0 00:02:37.418 CC module/bdev/raid/bdev_raid_rpc.o 00:02:37.418 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:37.418 CC module/bdev/aio/bdev_aio.o 00:02:37.418 CC module/bdev/ftl/bdev_ftl.o 00:02:37.418 SYMLINK libspdk_bdev_lvol.so 00:02:37.418 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:37.418 CC module/bdev/split/vbdev_split_rpc.o 00:02:37.418 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:37.676 LIB libspdk_bdev_split.a 00:02:37.676 LIB libspdk_bdev_ftl.a 00:02:37.676 LIB libspdk_bdev_passthru.a 00:02:37.676 SO libspdk_bdev_split.so.6.0 00:02:37.676 CC module/bdev/iscsi/bdev_iscsi.o 00:02:37.676 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:37.676 SO libspdk_bdev_ftl.so.6.0 00:02:37.676 CC module/bdev/aio/bdev_aio_rpc.o 00:02:37.676 SO libspdk_bdev_passthru.so.6.0 00:02:37.676 SYMLINK libspdk_bdev_split.so 00:02:37.676 SYMLINK libspdk_bdev_ftl.so 00:02:37.676 CC module/bdev/raid/bdev_raid_sb.o 00:02:37.935 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:37.935 SYMLINK libspdk_bdev_passthru.so 00:02:37.935 CC module/bdev/nvme/nvme_rpc.o 00:02:37.935 CC module/bdev/nvme/bdev_mdns_client.o 00:02:37.935 LIB libspdk_bdev_aio.a 00:02:37.935 LIB libspdk_bdev_zone_block.a 00:02:37.935 SO libspdk_bdev_aio.so.6.0 00:02:37.935 SO libspdk_bdev_zone_block.so.6.0 00:02:37.935 SYMLINK libspdk_bdev_aio.so 00:02:37.935 CC module/bdev/nvme/vbdev_opal.o 
00:02:37.935 SYMLINK libspdk_bdev_zone_block.so 00:02:37.935 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:37.935 CC module/bdev/rbd/bdev_rbd.o 00:02:38.194 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.194 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.194 CC module/bdev/raid/raid0.o 00:02:38.194 CC module/bdev/rbd/bdev_rbd_rpc.o 00:02:38.194 LIB libspdk_bdev_iscsi.a 00:02:38.194 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.194 CC module/bdev/raid/raid1.o 00:02:38.194 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.194 SO libspdk_bdev_iscsi.so.6.0 00:02:38.194 CC module/bdev/raid/concat.o 00:02:38.465 SYMLINK libspdk_bdev_iscsi.so 00:02:38.465 LIB libspdk_bdev_rbd.a 00:02:38.465 LIB libspdk_bdev_virtio.a 00:02:38.465 LIB libspdk_bdev_raid.a 00:02:38.465 SO libspdk_bdev_rbd.so.7.0 00:02:38.465 SO libspdk_bdev_virtio.so.6.0 00:02:38.739 SO libspdk_bdev_raid.so.6.0 00:02:38.739 SYMLINK libspdk_bdev_rbd.so 00:02:38.739 SYMLINK libspdk_bdev_virtio.so 00:02:38.739 SYMLINK libspdk_bdev_raid.so 00:02:39.306 LIB libspdk_bdev_nvme.a 00:02:39.306 SO libspdk_bdev_nvme.so.7.0 00:02:39.565 SYMLINK libspdk_bdev_nvme.so 00:02:40.133 CC module/event/subsystems/scheduler/scheduler.o 00:02:40.133 CC module/event/subsystems/sock/sock.o 00:02:40.133 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.133 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.133 CC module/event/subsystems/vmd/vmd.o 00:02:40.133 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.133 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.133 CC module/event/subsystems/keyring/keyring.o 00:02:40.133 LIB libspdk_event_vhost_blk.a 00:02:40.133 LIB libspdk_event_scheduler.a 00:02:40.133 SO libspdk_event_vhost_blk.so.3.0 00:02:40.133 LIB libspdk_event_keyring.a 00:02:40.133 SO libspdk_event_scheduler.so.4.0 00:02:40.133 LIB libspdk_event_sock.a 00:02:40.133 LIB libspdk_event_vmd.a 00:02:40.133 SO libspdk_event_keyring.so.1.0 00:02:40.133 LIB libspdk_event_iobuf.a 00:02:40.133 SO 
libspdk_event_sock.so.5.0 00:02:40.133 SYMLINK libspdk_event_vhost_blk.so 00:02:40.133 SO libspdk_event_vmd.so.6.0 00:02:40.133 SO libspdk_event_iobuf.so.3.0 00:02:40.133 SYMLINK libspdk_event_scheduler.so 00:02:40.133 SYMLINK libspdk_event_keyring.so 00:02:40.133 SYMLINK libspdk_event_sock.so 00:02:40.392 SYMLINK libspdk_event_vmd.so 00:02:40.392 SYMLINK libspdk_event_iobuf.so 00:02:40.651 CC module/event/subsystems/accel/accel.o 00:02:40.651 LIB libspdk_event_accel.a 00:02:40.910 SO libspdk_event_accel.so.6.0 00:02:40.910 SYMLINK libspdk_event_accel.so 00:02:41.169 CC module/event/subsystems/bdev/bdev.o 00:02:41.427 LIB libspdk_event_bdev.a 00:02:41.427 SO libspdk_event_bdev.so.6.0 00:02:41.427 SYMLINK libspdk_event_bdev.so 00:02:41.686 CC module/event/subsystems/ublk/ublk.o 00:02:41.686 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:41.686 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:41.686 CC module/event/subsystems/scsi/scsi.o 00:02:41.686 CC module/event/subsystems/nbd/nbd.o 00:02:41.945 LIB libspdk_event_ublk.a 00:02:41.945 LIB libspdk_event_nbd.a 00:02:41.945 LIB libspdk_event_scsi.a 00:02:41.945 SO libspdk_event_ublk.so.3.0 00:02:41.945 SO libspdk_event_nbd.so.6.0 00:02:41.945 SO libspdk_event_scsi.so.6.0 00:02:41.945 LIB libspdk_event_nvmf.a 00:02:41.945 SYMLINK libspdk_event_ublk.so 00:02:41.945 SYMLINK libspdk_event_nbd.so 00:02:41.945 SYMLINK libspdk_event_scsi.so 00:02:41.945 SO libspdk_event_nvmf.so.6.0 00:02:41.945 SYMLINK libspdk_event_nvmf.so 00:02:42.204 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.204 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.463 LIB libspdk_event_vhost_scsi.a 00:02:42.463 LIB libspdk_event_iscsi.a 00:02:42.463 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.463 SO libspdk_event_iscsi.so.6.0 00:02:42.463 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.463 SYMLINK libspdk_event_iscsi.so 00:02:42.722 SO libspdk.so.6.0 00:02:42.722 SYMLINK libspdk.so 00:02:42.981 CC test/rpc_client/rpc_client_test.o 
00:02:42.981 TEST_HEADER include/spdk/accel.h 00:02:42.981 CXX app/trace/trace.o 00:02:42.981 TEST_HEADER include/spdk/accel_module.h 00:02:42.981 TEST_HEADER include/spdk/assert.h 00:02:42.981 TEST_HEADER include/spdk/barrier.h 00:02:42.981 TEST_HEADER include/spdk/base64.h 00:02:42.981 TEST_HEADER include/spdk/bdev.h 00:02:42.981 TEST_HEADER include/spdk/bdev_module.h 00:02:42.981 TEST_HEADER include/spdk/bdev_zone.h 00:02:42.981 TEST_HEADER include/spdk/bit_array.h 00:02:42.981 TEST_HEADER include/spdk/bit_pool.h 00:02:42.981 TEST_HEADER include/spdk/blob_bdev.h 00:02:42.981 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:42.981 TEST_HEADER include/spdk/blobfs.h 00:02:42.981 TEST_HEADER include/spdk/blob.h 00:02:42.981 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.981 TEST_HEADER include/spdk/conf.h 00:02:42.981 TEST_HEADER include/spdk/config.h 00:02:42.981 TEST_HEADER include/spdk/cpuset.h 00:02:42.981 TEST_HEADER include/spdk/crc16.h 00:02:42.981 TEST_HEADER include/spdk/crc32.h 00:02:42.981 TEST_HEADER include/spdk/crc64.h 00:02:42.981 TEST_HEADER include/spdk/dif.h 00:02:42.981 TEST_HEADER include/spdk/dma.h 00:02:42.981 TEST_HEADER include/spdk/endian.h 00:02:42.981 TEST_HEADER include/spdk/env_dpdk.h 00:02:42.981 TEST_HEADER include/spdk/env.h 00:02:42.981 TEST_HEADER include/spdk/event.h 00:02:42.981 TEST_HEADER include/spdk/fd_group.h 00:02:42.981 TEST_HEADER include/spdk/fd.h 00:02:42.981 TEST_HEADER include/spdk/file.h 00:02:42.981 TEST_HEADER include/spdk/ftl.h 00:02:42.981 TEST_HEADER include/spdk/gpt_spec.h 00:02:42.982 TEST_HEADER include/spdk/hexlify.h 00:02:42.982 CC test/thread/poller_perf/poller_perf.o 00:02:42.982 TEST_HEADER include/spdk/histogram_data.h 00:02:42.982 TEST_HEADER include/spdk/idxd.h 00:02:43.241 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.241 CC examples/ioat/perf/perf.o 00:02:43.241 TEST_HEADER include/spdk/init.h 00:02:43.241 CC examples/util/zipf/zipf.o 00:02:43.241 TEST_HEADER include/spdk/ioat.h 00:02:43.241 
TEST_HEADER include/spdk/ioat_spec.h 00:02:43.241 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.241 TEST_HEADER include/spdk/json.h 00:02:43.241 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.241 TEST_HEADER include/spdk/keyring.h 00:02:43.241 CC test/dma/test_dma/test_dma.o 00:02:43.241 TEST_HEADER include/spdk/keyring_module.h 00:02:43.241 TEST_HEADER include/spdk/likely.h 00:02:43.241 TEST_HEADER include/spdk/log.h 00:02:43.241 CC test/app/bdev_svc/bdev_svc.o 00:02:43.241 TEST_HEADER include/spdk/lvol.h 00:02:43.241 TEST_HEADER include/spdk/memory.h 00:02:43.241 TEST_HEADER include/spdk/mmio.h 00:02:43.241 TEST_HEADER include/spdk/nbd.h 00:02:43.241 TEST_HEADER include/spdk/net.h 00:02:43.241 TEST_HEADER include/spdk/notify.h 00:02:43.241 TEST_HEADER include/spdk/nvme.h 00:02:43.241 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.241 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.241 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.241 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.241 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.241 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.241 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.241 TEST_HEADER include/spdk/nvmf.h 00:02:43.241 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.241 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.241 TEST_HEADER include/spdk/opal.h 00:02:43.241 TEST_HEADER include/spdk/opal_spec.h 00:02:43.241 TEST_HEADER include/spdk/pci_ids.h 00:02:43.241 TEST_HEADER include/spdk/pipe.h 00:02:43.241 CC test/env/mem_callbacks/mem_callbacks.o 00:02:43.241 TEST_HEADER include/spdk/queue.h 00:02:43.241 TEST_HEADER include/spdk/reduce.h 00:02:43.241 TEST_HEADER include/spdk/rpc.h 00:02:43.241 TEST_HEADER include/spdk/scheduler.h 00:02:43.241 TEST_HEADER include/spdk/scsi.h 00:02:43.241 LINK rpc_client_test 00:02:43.241 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.241 TEST_HEADER include/spdk/sock.h 00:02:43.241 TEST_HEADER include/spdk/stdinc.h 00:02:43.241 TEST_HEADER include/spdk/string.h 
00:02:43.241 TEST_HEADER include/spdk/thread.h 00:02:43.241 TEST_HEADER include/spdk/trace.h 00:02:43.241 TEST_HEADER include/spdk/trace_parser.h 00:02:43.241 TEST_HEADER include/spdk/tree.h 00:02:43.241 TEST_HEADER include/spdk/ublk.h 00:02:43.241 TEST_HEADER include/spdk/util.h 00:02:43.241 TEST_HEADER include/spdk/uuid.h 00:02:43.241 TEST_HEADER include/spdk/version.h 00:02:43.241 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.241 LINK interrupt_tgt 00:02:43.241 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.241 TEST_HEADER include/spdk/vhost.h 00:02:43.241 TEST_HEADER include/spdk/vmd.h 00:02:43.241 TEST_HEADER include/spdk/xor.h 00:02:43.241 TEST_HEADER include/spdk/zipf.h 00:02:43.241 LINK poller_perf 00:02:43.241 CXX test/cpp_headers/accel.o 00:02:43.241 LINK zipf 00:02:43.241 LINK bdev_svc 00:02:43.500 LINK ioat_perf 00:02:43.500 CXX test/cpp_headers/accel_module.o 00:02:43.500 CXX test/cpp_headers/assert.o 00:02:43.500 CC app/trace_record/trace_record.o 00:02:43.500 LINK spdk_trace 00:02:43.500 CC app/nvmf_tgt/nvmf_main.o 00:02:43.500 LINK test_dma 00:02:43.500 CC examples/ioat/verify/verify.o 00:02:43.500 CC test/event/event_perf/event_perf.o 00:02:43.759 CXX test/cpp_headers/barrier.o 00:02:43.759 CC test/app/histogram_perf/histogram_perf.o 00:02:43.759 LINK spdk_trace_record 00:02:43.759 LINK event_perf 00:02:43.759 LINK nvmf_tgt 00:02:43.759 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:43.759 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:43.759 CXX test/cpp_headers/base64.o 00:02:43.759 CXX test/cpp_headers/bdev.o 00:02:43.759 LINK verify 00:02:43.759 LINK mem_callbacks 00:02:43.759 LINK histogram_perf 00:02:44.018 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.018 CC test/event/reactor/reactor.o 00:02:44.018 CXX test/cpp_headers/bdev_module.o 00:02:44.018 CXX test/cpp_headers/bdev_zone.o 00:02:44.018 CC test/env/vtophys/vtophys.o 00:02:44.018 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.018 CC app/iscsi_tgt/iscsi_tgt.o 00:02:44.018 
LINK reactor 00:02:44.018 CC examples/thread/thread/thread_ex.o 00:02:44.018 CC examples/sock/hello_world/hello_sock.o 00:02:44.277 LINK vtophys 00:02:44.277 CXX test/cpp_headers/bit_array.o 00:02:44.277 LINK nvme_fuzz 00:02:44.277 LINK iscsi_tgt 00:02:44.277 CC test/event/reactor_perf/reactor_perf.o 00:02:44.277 CXX test/cpp_headers/bit_pool.o 00:02:44.277 CC test/accel/dif/dif.o 00:02:44.277 LINK thread 00:02:44.277 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:44.536 LINK hello_sock 00:02:44.536 CC test/env/memory/memory_ut.o 00:02:44.536 LINK reactor_perf 00:02:44.536 LINK vhost_fuzz 00:02:44.536 CXX test/cpp_headers/blob_bdev.o 00:02:44.536 LINK env_dpdk_post_init 00:02:44.536 CC app/spdk_lspci/spdk_lspci.o 00:02:44.794 CC app/spdk_tgt/spdk_tgt.o 00:02:44.794 CC test/event/app_repeat/app_repeat.o 00:02:44.794 CXX test/cpp_headers/blobfs_bdev.o 00:02:44.794 CC test/event/scheduler/scheduler.o 00:02:44.794 LINK spdk_lspci 00:02:44.794 CC examples/vmd/led/led.o 00:02:44.794 CC examples/vmd/lsvmd/lsvmd.o 00:02:44.794 LINK app_repeat 00:02:44.794 LINK dif 00:02:44.794 LINK spdk_tgt 00:02:44.794 CXX test/cpp_headers/blobfs.o 00:02:45.053 LINK lsvmd 00:02:45.053 LINK led 00:02:45.053 LINK scheduler 00:02:45.053 CC app/spdk_nvme_perf/perf.o 00:02:45.053 CXX test/cpp_headers/blob.o 00:02:45.053 CXX test/cpp_headers/conf.o 00:02:45.053 CC app/spdk_nvme_identify/identify.o 00:02:45.053 CXX test/cpp_headers/config.o 00:02:45.053 CXX test/cpp_headers/cpuset.o 00:02:45.312 CC examples/idxd/perf/perf.o 00:02:45.312 CXX test/cpp_headers/crc16.o 00:02:45.312 CC app/spdk_nvme_discover/discovery_aer.o 00:02:45.312 CC app/spdk_top/spdk_top.o 00:02:45.312 CC test/blobfs/mkfs/mkfs.o 00:02:45.312 CXX test/cpp_headers/crc32.o 00:02:45.571 CC test/lvol/esnap/esnap.o 00:02:45.571 LINK spdk_nvme_discover 00:02:45.571 LINK mkfs 00:02:45.571 CXX test/cpp_headers/crc64.o 00:02:45.571 LINK idxd_perf 00:02:45.571 CXX test/cpp_headers/dif.o 00:02:45.571 LINK iscsi_fuzz 
00:02:45.830 LINK memory_ut 00:02:45.830 CXX test/cpp_headers/dma.o 00:02:45.830 CC examples/accel/perf/accel_perf.o 00:02:45.830 LINK spdk_nvme_perf 00:02:45.830 CC test/nvme/aer/aer.o 00:02:45.830 CC examples/blob/hello_world/hello_blob.o 00:02:46.089 CC test/env/pci/pci_ut.o 00:02:46.089 LINK spdk_nvme_identify 00:02:46.089 CC test/app/jsoncat/jsoncat.o 00:02:46.089 CXX test/cpp_headers/endian.o 00:02:46.089 LINK jsoncat 00:02:46.089 CXX test/cpp_headers/env_dpdk.o 00:02:46.089 LINK hello_blob 00:02:46.089 CC examples/blob/cli/blobcli.o 00:02:46.348 LINK aer 00:02:46.348 LINK spdk_top 00:02:46.348 CXX test/cpp_headers/env.o 00:02:46.348 CC test/app/stub/stub.o 00:02:46.348 CC test/bdev/bdevio/bdevio.o 00:02:46.348 CXX test/cpp_headers/event.o 00:02:46.348 LINK pci_ut 00:02:46.348 LINK accel_perf 00:02:46.348 CC test/nvme/reset/reset.o 00:02:46.607 LINK stub 00:02:46.607 CXX test/cpp_headers/fd_group.o 00:02:46.607 CC app/vhost/vhost.o 00:02:46.607 CC examples/nvme/hello_world/hello_world.o 00:02:46.607 CC examples/nvme/reconnect/reconnect.o 00:02:46.607 CXX test/cpp_headers/fd.o 00:02:46.607 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:46.607 LINK blobcli 00:02:46.866 CC examples/nvme/arbitration/arbitration.o 00:02:46.866 LINK reset 00:02:46.866 LINK bdevio 00:02:46.866 LINK vhost 00:02:46.866 CXX test/cpp_headers/file.o 00:02:46.866 LINK hello_world 00:02:46.866 CC test/nvme/sgl/sgl.o 00:02:47.126 CXX test/cpp_headers/ftl.o 00:02:47.126 CC test/nvme/e2edp/nvme_dp.o 00:02:47.126 LINK reconnect 00:02:47.126 CC test/nvme/overhead/overhead.o 00:02:47.126 LINK arbitration 00:02:47.126 CC app/spdk_dd/spdk_dd.o 00:02:47.126 CC app/fio/nvme/fio_plugin.o 00:02:47.126 CXX test/cpp_headers/gpt_spec.o 00:02:47.386 LINK nvme_manage 00:02:47.386 LINK sgl 00:02:47.386 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:47.386 LINK nvme_dp 00:02:47.386 CC examples/nvme/hotplug/hotplug.o 00:02:47.386 CXX test/cpp_headers/hexlify.o 00:02:47.386 LINK overhead 00:02:47.386 CXX 
test/cpp_headers/histogram_data.o 00:02:47.645 LINK spdk_dd 00:02:47.645 LINK cmb_copy 00:02:47.645 CC examples/nvme/abort/abort.o 00:02:47.645 CC test/nvme/err_injection/err_injection.o 00:02:47.645 CXX test/cpp_headers/idxd.o 00:02:47.645 LINK hotplug 00:02:47.645 CC test/nvme/startup/startup.o 00:02:47.645 CC app/fio/bdev/fio_plugin.o 00:02:47.904 LINK err_injection 00:02:47.904 CC test/nvme/reserve/reserve.o 00:02:47.904 CXX test/cpp_headers/idxd_spec.o 00:02:47.904 CC test/nvme/simple_copy/simple_copy.o 00:02:47.904 LINK spdk_nvme 00:02:47.904 CC test/nvme/connect_stress/connect_stress.o 00:02:47.904 LINK startup 00:02:47.904 CXX test/cpp_headers/init.o 00:02:47.904 CXX test/cpp_headers/ioat.o 00:02:47.904 LINK abort 00:02:48.163 LINK reserve 00:02:48.163 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:48.163 LINK connect_stress 00:02:48.163 CXX test/cpp_headers/ioat_spec.o 00:02:48.163 LINK simple_copy 00:02:48.163 CXX test/cpp_headers/iscsi_spec.o 00:02:48.163 CC test/nvme/boot_partition/boot_partition.o 00:02:48.163 LINK pmr_persistence 00:02:48.163 LINK spdk_bdev 00:02:48.163 CXX test/cpp_headers/json.o 00:02:48.423 CC test/nvme/compliance/nvme_compliance.o 00:02:48.423 CC test/nvme/fused_ordering/fused_ordering.o 00:02:48.423 LINK boot_partition 00:02:48.423 CC test/nvme/fdp/fdp.o 00:02:48.423 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:48.423 CC examples/bdev/hello_world/hello_bdev.o 00:02:48.423 CXX test/cpp_headers/jsonrpc.o 00:02:48.423 CC test/nvme/cuse/cuse.o 00:02:48.423 CXX test/cpp_headers/keyring.o 00:02:48.423 CC examples/bdev/bdevperf/bdevperf.o 00:02:48.423 LINK fused_ordering 00:02:48.423 LINK doorbell_aers 00:02:48.682 CXX test/cpp_headers/keyring_module.o 00:02:48.682 LINK hello_bdev 00:02:48.682 CXX test/cpp_headers/likely.o 00:02:48.682 CXX test/cpp_headers/log.o 00:02:48.682 CXX test/cpp_headers/lvol.o 00:02:48.682 LINK nvme_compliance 00:02:48.682 LINK fdp 00:02:48.682 CXX test/cpp_headers/memory.o 00:02:48.682 CXX 
test/cpp_headers/mmio.o 00:02:48.682 CXX test/cpp_headers/nbd.o 00:02:48.682 CXX test/cpp_headers/net.o 00:02:48.941 CXX test/cpp_headers/notify.o 00:02:48.941 CXX test/cpp_headers/nvme.o 00:02:48.941 CXX test/cpp_headers/nvme_intel.o 00:02:48.941 CXX test/cpp_headers/nvme_ocssd.o 00:02:48.941 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:48.941 CXX test/cpp_headers/nvme_spec.o 00:02:48.941 CXX test/cpp_headers/nvme_zns.o 00:02:48.941 CXX test/cpp_headers/nvmf_cmd.o 00:02:48.941 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:48.941 CXX test/cpp_headers/nvmf.o 00:02:48.941 CXX test/cpp_headers/nvmf_spec.o 00:02:48.941 CXX test/cpp_headers/nvmf_transport.o 00:02:49.201 CXX test/cpp_headers/opal.o 00:02:49.201 CXX test/cpp_headers/opal_spec.o 00:02:49.201 CXX test/cpp_headers/pci_ids.o 00:02:49.201 CXX test/cpp_headers/pipe.o 00:02:49.201 CXX test/cpp_headers/queue.o 00:02:49.201 CXX test/cpp_headers/reduce.o 00:02:49.201 CXX test/cpp_headers/rpc.o 00:02:49.201 CXX test/cpp_headers/scheduler.o 00:02:49.201 CXX test/cpp_headers/scsi.o 00:02:49.201 CXX test/cpp_headers/scsi_spec.o 00:02:49.201 LINK bdevperf 00:02:49.201 CXX test/cpp_headers/sock.o 00:02:49.460 CXX test/cpp_headers/stdinc.o 00:02:49.460 CXX test/cpp_headers/string.o 00:02:49.460 CXX test/cpp_headers/thread.o 00:02:49.460 CXX test/cpp_headers/trace.o 00:02:49.460 CXX test/cpp_headers/trace_parser.o 00:02:49.460 CXX test/cpp_headers/tree.o 00:02:49.460 CXX test/cpp_headers/ublk.o 00:02:49.460 CXX test/cpp_headers/util.o 00:02:49.460 CXX test/cpp_headers/uuid.o 00:02:49.460 CXX test/cpp_headers/version.o 00:02:49.460 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.460 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.460 CXX test/cpp_headers/vhost.o 00:02:49.719 CXX test/cpp_headers/vmd.o 00:02:49.719 CXX test/cpp_headers/xor.o 00:02:49.719 CXX test/cpp_headers/zipf.o 00:02:49.719 CC examples/nvmf/nvmf/nvmf.o 00:02:49.719 LINK cuse 00:02:49.978 LINK nvmf 00:02:51.882 LINK esnap 00:02:52.140 00:02:52.140 real 1m5.717s 
00:02:52.140 user 6m20.653s 00:02:52.140 sys 1m33.370s 00:02:52.140 01:58:00 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:52.140 01:58:00 make -- common/autotest_common.sh@10 -- $ set +x 00:02:52.140 ************************************ 00:02:52.140 END TEST make 00:02:52.140 ************************************ 00:02:52.140 01:58:00 -- common/autotest_common.sh@1142 -- $ return 0 00:02:52.141 01:58:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:52.141 01:58:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:52.141 01:58:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:52.141 01:58:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.141 01:58:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:02:52.141 01:58:00 -- pm/common@44 -- $ pid=5137 00:02:52.141 01:58:00 -- pm/common@50 -- $ kill -TERM 5137 00:02:52.141 01:58:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.141 01:58:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:02:52.141 01:58:00 -- pm/common@44 -- $ pid=5139 00:02:52.141 01:58:00 -- pm/common@50 -- $ kill -TERM 5139 00:02:52.400 01:58:00 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:52.400 01:58:00 -- nvmf/common.sh@7 -- # uname -s 00:02:52.400 01:58:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:52.400 01:58:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:52.400 01:58:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:52.400 01:58:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:52.400 01:58:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:52.400 01:58:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:52.400 01:58:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:52.400 01:58:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:52.400 01:58:00 
-- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:52.400 01:58:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:52.400 01:58:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f54a7147-29b2-4915-ad44-5b62f2934558 00:02:52.400 01:58:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=f54a7147-29b2-4915-ad44-5b62f2934558 00:02:52.400 01:58:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:52.400 01:58:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:52.400 01:58:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:52.400 01:58:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:52.400 01:58:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:52.400 01:58:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:52.400 01:58:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:52.400 01:58:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:52.400 01:58:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.400 01:58:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.400 01:58:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.400 01:58:01 -- paths/export.sh@5 -- # export PATH 00:02:52.400 01:58:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:52.400 01:58:01 -- nvmf/common.sh@47 -- # : 0 00:02:52.400 01:58:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:52.400 01:58:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:52.400 01:58:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:52.400 01:58:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:52.400 01:58:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:52.400 01:58:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:52.400 01:58:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:52.400 01:58:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:52.400 01:58:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:52.400 01:58:01 -- spdk/autotest.sh@32 -- # uname -s 00:02:52.400 01:58:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:52.400 01:58:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:52.400 01:58:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:52.400 01:58:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:52.400 01:58:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:52.400 01:58:01 -- spdk/autotest.sh@44 -- # modprobe 
nbd 00:02:52.400 01:58:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:52.400 01:58:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:52.400 01:58:01 -- spdk/autotest.sh@48 -- # udevadm_pid=52768 00:02:52.400 01:58:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:52.400 01:58:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:52.400 01:58:01 -- pm/common@17 -- # local monitor 00:02:52.400 01:58:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.400 01:58:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:52.400 01:58:01 -- pm/common@25 -- # sleep 1 00:02:52.400 01:58:01 -- pm/common@21 -- # date +%s 00:02:52.400 01:58:01 -- pm/common@21 -- # date +%s 00:02:52.400 01:58:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721699881 00:02:52.400 01:58:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721699881 00:02:52.400 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721699881_collect-vmstat.pm.log 00:02:52.400 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721699881_collect-cpu-load.pm.log 00:02:53.336 01:58:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:53.336 01:58:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:53.336 01:58:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:53.336 01:58:02 -- common/autotest_common.sh@10 -- # set +x 00:02:53.336 01:58:02 -- spdk/autotest.sh@59 -- # create_test_list 00:02:53.336 01:58:02 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:53.336 01:58:02 -- common/autotest_common.sh@10 -- # set +x 00:02:53.594 01:58:02 -- spdk/autotest.sh@61 -- # dirname 
/home/vagrant/spdk_repo/spdk/autotest.sh 00:02:53.594 01:58:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:02:53.594 01:58:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:02:53.594 01:58:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.594 01:58:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:02:53.594 01:58:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:53.594 01:58:02 -- common/autotest_common.sh@1455 -- # uname 00:02:53.594 01:58:02 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:53.594 01:58:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:53.594 01:58:02 -- common/autotest_common.sh@1475 -- # uname 00:02:53.594 01:58:02 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:53.594 01:58:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:53.594 01:58:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:53.595 01:58:02 -- spdk/autotest.sh@72 -- # hash lcov 00:02:53.595 01:58:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:53.595 01:58:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:53.595 --rc lcov_branch_coverage=1 00:02:53.595 --rc lcov_function_coverage=1 00:02:53.595 --rc genhtml_branch_coverage=1 00:02:53.595 --rc genhtml_function_coverage=1 00:02:53.595 --rc genhtml_legend=1 00:02:53.595 --rc geninfo_all_blocks=1 00:02:53.595 ' 00:02:53.595 01:58:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:53.595 --rc lcov_branch_coverage=1 00:02:53.595 --rc lcov_function_coverage=1 00:02:53.595 --rc genhtml_branch_coverage=1 00:02:53.595 --rc genhtml_function_coverage=1 00:02:53.595 --rc genhtml_legend=1 00:02:53.595 --rc geninfo_all_blocks=1 00:02:53.595 ' 00:02:53.595 01:58:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:53.595 --rc lcov_branch_coverage=1 00:02:53.595 --rc lcov_function_coverage=1 00:02:53.595 --rc genhtml_branch_coverage=1 00:02:53.595 
--rc genhtml_function_coverage=1 00:02:53.595 --rc genhtml_legend=1 00:02:53.595 --rc geninfo_all_blocks=1 00:02:53.595 --no-external' 00:02:53.595 01:58:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:53.595 --rc lcov_branch_coverage=1 00:02:53.595 --rc lcov_function_coverage=1 00:02:53.595 --rc genhtml_branch_coverage=1 00:02:53.595 --rc genhtml_function_coverage=1 00:02:53.595 --rc genhtml_legend=1 00:02:53.595 --rc geninfo_all_blocks=1 00:02:53.595 --no-external' 00:02:53.595 01:58:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:53.595 lcov: LCOV version 1.14 00:02:53.595 01:58:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:05.827 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:05.827 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:18.031 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:18.031 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:18.031 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:18.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:18.032 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 
00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions 
found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:18.032 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:18.032 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:18.032 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:18.033 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:18.033 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:18.033 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:18.033 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:19.935 01:58:28 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:19.935 01:58:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:19.935 01:58:28 -- common/autotest_common.sh@10 -- # set +x 00:03:19.935 01:58:28 -- spdk/autotest.sh@91 -- # rm -f 00:03:19.935 01:58:28 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:20.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:20.502 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:20.502 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:20.502 01:58:29 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:20.502 01:58:29 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:20.502 01:58:29 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:20.502 01:58:29 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:20.502 01:58:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:20.502 01:58:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:20.502 01:58:29 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:20.502 01:58:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.502 01:58:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:20.502 
01:58:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:20.502 01:58:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:20.502 01:58:29 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:20.502 01:58:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:20.502 01:58:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:20.502 01:58:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:20.502 01:58:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:20.502 01:58:29 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:20.502 01:58:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:20.502 01:58:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:20.502 01:58:29 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:20.502 01:58:29 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:20.502 01:58:29 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:20.502 01:58:29 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:20.502 01:58:29 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:20.502 01:58:29 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:20.502 01:58:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:20.502 01:58:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:20.502 01:58:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:20.502 01:58:29 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:20.502 01:58:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:20.502 No valid GPT data, bailing 00:03:20.502 01:58:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:20.502 01:58:29 -- scripts/common.sh@391 -- # pt= 00:03:20.502 01:58:29 -- 
scripts/common.sh@392 -- # return 1 00:03:20.502 01:58:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:20.502 1+0 records in 00:03:20.502 1+0 records out 00:03:20.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0055062 s, 190 MB/s 00:03:20.502 01:58:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:20.502 01:58:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:20.502 01:58:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:20.502 01:58:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:20.503 01:58:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:20.761 No valid GPT data, bailing 00:03:20.761 01:58:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:20.761 01:58:29 -- scripts/common.sh@391 -- # pt= 00:03:20.761 01:58:29 -- scripts/common.sh@392 -- # return 1 00:03:20.761 01:58:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:20.761 1+0 records in 00:03:20.761 1+0 records out 00:03:20.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506643 s, 207 MB/s 00:03:20.761 01:58:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:20.761 01:58:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:20.761 01:58:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:20.761 01:58:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:20.761 01:58:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:20.761 No valid GPT data, bailing 00:03:20.761 01:58:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:20.761 01:58:29 -- scripts/common.sh@391 -- # pt= 00:03:20.761 01:58:29 -- scripts/common.sh@392 -- # return 1 00:03:20.761 01:58:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:20.761 1+0 records in 00:03:20.761 1+0 records out 00:03:20.761 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.00489498 s, 214 MB/s 00:03:20.761 01:58:29 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:20.761 01:58:29 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:20.761 01:58:29 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:20.761 01:58:29 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:20.761 01:58:29 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:20.761 No valid GPT data, bailing 00:03:20.761 01:58:29 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:20.761 01:58:29 -- scripts/common.sh@391 -- # pt= 00:03:20.761 01:58:29 -- scripts/common.sh@392 -- # return 1 00:03:20.761 01:58:29 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:20.761 1+0 records in 00:03:20.761 1+0 records out 00:03:20.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479667 s, 219 MB/s 00:03:20.761 01:58:29 -- spdk/autotest.sh@118 -- # sync 00:03:21.020 01:58:29 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:21.020 01:58:29 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:21.020 01:58:29 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:22.925 01:58:31 -- spdk/autotest.sh@124 -- # uname -s 00:03:22.925 01:58:31 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:22.925 01:58:31 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:22.925 01:58:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.925 01:58:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.925 01:58:31 -- common/autotest_common.sh@10 -- # set +x 00:03:22.925 ************************************ 00:03:22.925 START TEST setup.sh 00:03:22.925 ************************************ 00:03:22.925 01:58:31 setup.sh -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:22.925 * Looking for test storage... 00:03:22.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:22.925 01:58:31 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:22.925 01:58:31 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:22.925 01:58:31 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:22.925 01:58:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:22.925 01:58:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:22.925 01:58:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.925 ************************************ 00:03:22.925 START TEST acl 00:03:22.925 ************************************ 00:03:22.925 01:58:31 setup.sh.acl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:23.184 * Looking for test storage... 00:03:23.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:23.184 01:58:31 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in 
/sys/block/nvme* 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:23.184 01:58:31 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:23.184 01:58:31 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:23.184 01:58:31 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:23.184 01:58:31 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:23.184 01:58:31 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:23.184 01:58:31 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:23.184 01:58:31 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.184 01:58:31 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:24.119 01:58:32 
setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:24.119 01:58:32 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:24.119 01:58:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.119 01:58:32 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:24.119 01:58:32 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.119 01:58:32 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.688 Hugepages 00:03:24.688 node hugesize free / total 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.688 00:03:24.688 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.688 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:24.689 01:58:33 setup.sh.acl -- 
setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.689 01:58:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:24.948 01:58:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:24.948 01:58:33 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.948 01:58:33 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.948 01:58:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:24.948 ************************************ 00:03:24.948 START TEST denied 00:03:24.948 ************************************ 00:03:24.948 01:58:33 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:24.948 01:58:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:24.948 01:58:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:24.948 01:58:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:24.948 01:58:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.948 01:58:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:25.884 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # 
verify 0000:00:10.0 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.884 01:58:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:26.452 ************************************ 00:03:26.452 END TEST denied 00:03:26.452 ************************************ 00:03:26.452 00:03:26.452 real 0m1.557s 00:03:26.452 user 0m0.620s 00:03:26.452 sys 0m0.879s 00:03:26.452 01:58:35 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.452 01:58:35 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:26.452 01:58:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:26.452 01:58:35 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:26.452 01:58:35 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.452 01:58:35 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.452 01:58:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:26.452 ************************************ 00:03:26.452 START TEST allowed 00:03:26.452 ************************************ 00:03:26.452 01:58:35 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:26.452 01:58:35 setup.sh.acl.allowed -- 
setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:26.452 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:26.452 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:26.452 01:58:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.452 01:58:35 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:27.388 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.388 01:58:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:28.325 00:03:28.325 real 0m1.641s 00:03:28.325 user 0m0.688s 00:03:28.325 sys 0m0.937s 00:03:28.325 ************************************ 00:03:28.325 END TEST allowed 00:03:28.325 ************************************ 00:03:28.325 01:58:36 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.325 01:58:36 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:28.325 01:58:36 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:28.325 
************************************ 00:03:28.325 END TEST acl 00:03:28.325 ************************************ 00:03:28.325 00:03:28.325 real 0m5.123s 00:03:28.325 user 0m2.204s 00:03:28.325 sys 0m2.838s 00:03:28.325 01:58:36 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.325 01:58:36 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:28.325 01:58:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:28.325 01:58:36 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:28.325 01:58:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.325 01:58:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.325 01:58:36 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:28.325 ************************************ 00:03:28.325 START TEST hugepages 00:03:28.325 ************************************ 00:03:28.325 01:58:36 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:28.325 * Looking for test storage... 
00:03:28.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.325 01:58:36 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5846976 kB' 'MemAvailable: 7402804 kB' 'Buffers: 2436 kB' 'Cached: 1769900 kB' 'SwapCached: 0 kB' 'Active: 434944 kB' 'Inactive: 1441764 kB' 'Active(anon): 114860 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441764 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 304 kB' 'Writeback: 0 kB' 'AnonPages: 106032 kB' 'Mapped: 48640 kB' 'Shmem: 10488 kB' 'KReclaimable: 61840 kB' 'Slab: 133164 kB' 'SReclaimable: 61840 kB' 'SUnreclaim: 71324 kB' 'KernelStack: 6396 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 
00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # 
read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.326 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 
-- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@18 -- # 
global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:28.327 01:58:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.327 01:58:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:28.327 01:58:37 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:28.327 01:58:37 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:28.327 01:58:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:28.327 01:58:37 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:28.327 01:58:37 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:28.327 01:58:37 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.327 01:58:37 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.327 01:58:37 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:28.327 ************************************ 00:03:28.327 START TEST default_setup 00:03:28.327 ************************************ 00:03:28.327 01:58:37 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:28.327 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:28.327 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@67 -- # local -g nodes_test 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.328 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:29.268 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:29.268 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:29.268 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:29.268 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:29.268 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:29.268 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.268 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.268 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:29.268 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.269 
01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942888 kB' 'MemAvailable: 9498604 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451760 kB' 'Inactive: 1441776 kB' 'Active(anon): 131676 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122836 kB' 'Mapped: 48792 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132768 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6336 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': 
' 00:03:29.269 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... repeated skip iterations omitted: /proc/meminfo keys Cached through HardwareCorrupted each fail the AnonHugePages match at setup/common.sh@32 and hit continue ...]
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942888 kB' 'MemAvailable: 9498604 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451368 kB' 'Inactive: 1441776 kB' 'Active(anon): 131284 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122476 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132768 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6336 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 
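The snapshot above is what the traced helper scans field by field, skipping every key until the requested one matches. The following is a minimal standalone re-creation of that skip-until-match pattern; the function name `get_meminfo_sketch` and the optional file argument are illustrative assumptions, not SPDK's actual `setup/common.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the /proc/meminfo scan seen in the trace:
# split each "Key: value kB" line on ': ', skip non-matching keys
# via continue, and echo the value of the requested key (0 if absent).
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys skipped, as in the log
        echo "${val:-0}"
        return 0
    done < "$mem_f"
    echo 0   # key absent -> default 0, matching the hugepages.sh assignments
}

# Demo against a small fabricated snapshot (values taken from the log's printf):
printf '%s\n' 'MemTotal: 12241972 kB' 'AnonHugePages: 0 kB' 'HugePages_Surp: 0' > /tmp/meminfo.demo
get_meminfo_sketch HugePages_Surp /tmp/meminfo.demo   # prints 0
get_meminfo_sketch MemTotal /tmp/meminfo.demo         # prints 12241972
```

Reading per-node files (`/sys/devices/system/node/node<N>/meminfo`) works the same way once the `Node <N> ` prefix is stripped, which is what the `mem=("${mem[@]#Node +([0-9]) }")` line in the trace does.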
00:03:29.270 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
[... repeated skip iterations omitted: snapshot keys MemTotal through HugePages_Rsvd each fail the HugePages_Surp match at setup/common.sh@32 and hit continue ...]
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # 
local var val 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.272 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942888 kB' 'MemAvailable: 9498604 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451328 kB' 'Inactive: 1441776 kB' 'Active(anon): 131244 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122388 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132768 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6320 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.273 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 
01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.274 nr_hugepages=1024 00:03:29.274 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.274 resv_hugepages=0 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.275 surplus_hugepages=0 00:03:29.275 anon_hugepages=0 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942888 kB' 'MemAvailable: 9498604 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451372 kB' 'Inactive: 1441776 kB' 'Active(anon): 131288 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122456 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 
132768 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 6320 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.275 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.536 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 
01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 
01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv 
)) 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.537 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7942888 kB' 'MemUsed: 4299084 kB' 'SwapCached: 0 kB' 'Active: 451388 kB' 'Inactive: 1441776 kB' 'Active(anon): 131304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1772328 kB' 'Mapped: 48664 kB' 'AnonPages: 122440 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132768 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71180 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 
01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.538 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 
01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:29.539 node0=1024 expecting 1024 00:03:29.539 ************************************ 00:03:29.539 END TEST default_setup 00:03:29.539 ************************************ 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.539 00:03:29.539 real 0m1.072s 00:03:29.539 user 0m0.485s 00:03:29.539 sys 0m0.499s 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.539 01:58:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:29.539 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:29.539 01:58:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:29.539 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.539 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.539 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.539 ************************************ 00:03:29.539 START TEST per_node_1G_alloc 00:03:29.539 ************************************ 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:29.539 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.539 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:29.798 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:29.798 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:29.798 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.063 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9008016 kB' 'MemAvailable: 10563740 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451476 kB' 'Inactive: 1441784 kB' 'Active(anon): 131392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122804 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132732 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71144 kB' 'KernelStack: 6356 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 
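The trace above shows the pattern used throughout this log: `mapfile -t mem` captures the whole `/proc/meminfo` dump, then a loop with `IFS=': '` and `read -r var val _` walks it, hitting `continue` on every key until the requested one matches and its value is echoed. A minimal standalone sketch of that parsing approach (not the SPDK source; `get_field` and the demo file path are hypothetical):

```shell
#!/usr/bin/env bash
# Hedged sketch of a get_meminfo-style lookup over "Key: value kB" records,
# mirroring the IFS=': ' / read -r var val _ / continue loop in the xtrace above.
get_field() {
    local get=$1 file=$2 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through to continue, as seen in the log
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done < "$file"
    echo 0   # key absent: report 0, matching the log's "echo 0 / return 0"
}

# Synthetic meminfo snippet for demonstration (values taken from this log)
printf '%s\n' 'MemTotal: 12241972 kB' 'HugePages_Total: 512' \
    'HugePages_Surp: 0' > /tmp/meminfo.demo
get_field HugePages_Total /tmp/meminfo.demo   # prints 512
```

The `IFS=': '` setting splits each record on the colon and surrounding spaces, so `var` receives the key and `val` the numeric value, with the trailing `kB` discarded into `_`.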
01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.063 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 
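For context on the numbers in this test: `hugepages.sh@49` records a requested `size=1048576` (kB, i.e. 1 GiB) and `hugepages.sh@57` sets `nr_hugepages=512`, which is consistent with the `Hugepagesize: 2048 kB` and `Hugetlb: 1048576 kB` fields in the meminfo dumps above. A quick arithmetic check using only values from this log:

```shell
# 1 GiB request expressed in default-size hugepages:
# HugePages_Total = size / Hugepagesize, and Hugetlb = HugePages_Total * Hugepagesize.
size_kb=1048576        # requested size from hugepages.sh@49 (1 GiB)
hugepagesize_kb=2048   # 'Hugepagesize: 2048 kB' from the meminfo dump
echo $(( size_kb / hugepagesize_kb ))   # prints 512
```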
01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.064 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.065 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9008148 kB' 'MemAvailable: 10563872 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451300 kB' 'Inactive: 1441784 kB' 'Active(anon): 131216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132732 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71144 kB' 'KernelStack: 6348 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.065 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" trace repeated once per remaining /proc/meminfo key (SwapCached through HugePages_Rsvd), timestamps 00:03:30.065-00:03:30.067 ...]
00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241972 kB' 'MemFree: 9008480 kB' 'MemAvailable: 10564204 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451308 kB' 'Inactive: 1441784 kB' 'Active(anon): 131224 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122376 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132728 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71140 kB' 'KernelStack: 6332 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:30.067 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _" trace repeated once per non-matching /proc/meminfo key (MemAvailable through ShmemPmdMapped), timestamps 00:03:30.067-00:03:30.069 ...]
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 
01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.069 nr_hugepages=512 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:30.069 resv_hugepages=0 00:03:30.069 surplus_hugepages=0 00:03:30.069 anon_hugepages=0 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:30.069 
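The trace above is bash xtrace output from a get_meminfo-style helper in setup/common.sh: it scans /proc/meminfo one `Key: value` line at a time, `continue`-ing past every key until it reaches the one requested (here HugePages_Rsvd) and echoing its value. A minimal self-contained sketch of that pattern — the helper name `get_field` and the canned input are illustrative assumptions, not SPDK's actual code:

```shell
# Sketch of the get_meminfo scan pattern seen in the trace above.
# get_field is a hypothetical stand-in for setup/common.sh's get_meminfo;
# it splits "Key: value unit" lines on ': ' and skips non-matching keys
# with continue, exactly like the [[ key == pattern ]] / continue loop.
get_field() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the key we want; keep scanning
        echo "$val"                        # print the value, drop the unit
        return 0
    done
    return 1                               # key not present
}

# Canned /proc/meminfo-style input keeps the example self-contained.
meminfo='MemTotal: 12241972 kB
HugePages_Total: 512
HugePages_Rsvd: 0'

get_field HugePages_Rsvd <<<"$meminfo"    # prints 0
get_field HugePages_Total <<<"$meminfo"   # prints 512
```

Parsing with `IFS=': '` treats both the colon and spaces as field separators, so the `kB` unit lands in the throwaway third variable.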
01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9008480 kB' 'MemAvailable: 10564204 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451312 kB' 'Inactive: 1441784 kB' 'Active(anon): 131228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 122648 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132728 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71140 kB' 'KernelStack: 6332 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB'
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.069 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read -r var val _ / [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeated for each remaining key, MemFree through Unaccepted ...]
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9008480 kB' 'MemUsed: 3233492 kB' 'SwapCached: 0 kB' 'Active: 451524 kB' 'Inactive: 1441784 kB' 'Active(anon): 131440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 1772332 kB' 'Mapped: 48680 kB' 'AnonPages: 122596 kB' 'Shmem: 10464 kB' 'KernelStack: 6332 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132724 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... identical trace repeated for MemFree, MemUsed, SwapCached, Active, Inactive ...]
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc --
setup/common.sh@32 -- # continue 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.071 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.072 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.073 node0=512 expecting 512 00:03:30.073 ************************************ 00:03:30.073 END TEST per_node_1G_alloc 00:03:30.073 ************************************ 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 
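The long runs of `[[ Field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` above are `get_meminfo` scanning every meminfo line until the requested key matches, then echoing its value. A self-contained sketch of that scan, assuming the same `Key: value kB` input format (the function name here is illustrative, not the script's own):

```shell
# Sketch of the field-scan pattern from setup/common.sh's get_meminfo:
# split each "Key: value kB" line on ': ', skip non-matching keys,
# and print the value of the first matching key.
get_meminfo_value() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every skip is one logged iteration
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}
```

Run against a file containing the trace's `HugePages_Surp: 0` line, this prints `0`, matching the `echo 0` / `return 0` pair at the end of the loop above.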
00:03:30.073 00:03:30.073 real 0m0.600s 00:03:30.073 user 0m0.270s 00:03:30.073 sys 0m0.341s 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.073 01:58:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.073 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.073 01:58:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:30.073 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.073 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.073 01:58:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.073 ************************************ 00:03:30.073 START TEST even_2G_alloc 00:03:30.073 ************************************ 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 
00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.073 01:58:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:30.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:30.648 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:30.648 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
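The `even_2G_alloc` setup above turns `get_test_nr_hugepages 2097152` into `nr_hugepages=1024`. A sketch of that sizing arithmetic, assuming the requested size is in kB and the default 2048 kB hugepage size reported elsewhere in this log (`Hugepagesize: 2048 kB`); the function name is hypothetical:

```shell
# Sketch of the sizing logic behind get_test_nr_hugepages: convert a
# requested allocation (kB) into a hugepage count. Assumes the 2048 kB
# default hugepage size seen in the trace's meminfo output.
calc_nr_hugepages() {
    local size_kb=$1 default_hugepage_kb=${2:-2048}
    echo $(( size_kb / default_hugepage_kb ))
}
```

For the 2 GiB request in this test, 2097152 kB / 2048 kB gives the 1024 pages the trace assigns to `nr_hugepages` and `NRHUGE`.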
setup/hugepages.sh@89 -- # local node 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
12241972 kB' 'MemFree: 7957492 kB' 'MemAvailable: 9513212 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451436 kB' 'Inactive: 1441780 kB' 'Active(anon): 131352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122808 kB' 'Mapped: 48940 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132720 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71132 kB' 'KernelStack: 6308 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.648 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:30.649 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / continue repeated for each remaining /proc/meminfo key, Unevictable through HardwareCorrupted, none matching AnonHugePages] 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.650 01:58:39
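The xtrace above is `setup/common.sh`'s `get_meminfo` walking `/proc/meminfo` one `Key: value` line at a time and returning the value of the requested key (here `AnonHugePages`, yielding 0). A minimal runnable sketch of that parsing pattern — a simplified, hypothetical helper written by the editor, not the actual SPDK function — fed a fixed sample instead of the live `/proc/meminfo`:

```shell
#!/usr/bin/env bash
# Sketch (assumption: mirrors the IFS=': ' / read loop traced above).
# Prints the value of the requested meminfo key from stdin, or 0 if absent.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Matches the traced [[ $var == $get ]] test; the trailing unit
        # field ("kB") is discarded into _.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

# Deterministic sample in /proc/meminfo format:
sample='MemTotal: 12241972 kB
HugePages_Total: 1024
HugePages_Surp: 0'

get_meminfo_sketch HugePages_Total <<< "$sample"  # prints 1024
get_meminfo_sketch HugePages_Surp <<< "$sample"   # prints 0
```

The real script additionally strips `Node N ` prefixes so the same loop works on per-node `meminfo` files, which is what the `mem=("${mem[@]#Node +([0-9]) }")` statements in the trace do.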
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7957492 kB' 'MemAvailable: 9513212 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451436 kB' 'Inactive: 1441780 kB' 'Active(anon): 131352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122528 kB' 'Mapped: 48680 kB' 'Shmem: 10464 kB' 'KReclaimable: 
61588 kB' 'Slab: 132724 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71136 kB' 'KernelStack: 6308 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.650 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: IFS=': ' / read -r var val _ / continue repeated for each /proc/meminfo key, MemTotal through HugePages_Rsvd, none matching HugePages_Surp] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7957492 kB' 'MemAvailable: 9513216 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451384 kB' 'Inactive: 1441784 kB' 'Active(anon): 131300 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 
'Writeback: 0 kB' 'AnonPages: 122476 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132764 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71176 kB' 'KernelStack: 6336 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 
01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.653 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.653 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.654 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.655 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 
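The long key-by-key trace above is the scan performed by `get_meminfo`: the script reads a meminfo file with `IFS=': '`, compares each key against the requested field (here `HugePages_Rsvd`), `continue`s on every non-match, and echoes the value once the key matches. The following is a minimal standalone sketch of that pattern, reconstructed from the trace rather than copied from `setup/common.sh`; the optional file argument is an addition for illustration (the real function reads `/proc/meminfo` or a per-NUMA-node meminfo file selected by its `node=` local).

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern visible in the trace. The file
# argument is a testing convenience added here; it is not part of the
# original script, which defaults to /proc/meminfo.
get_meminfo() {
    local get=$1
    local mem_f=${2:-/proc/meminfo}
    local var val _
    # IFS=': ' splits "HugePages_Rsvd: 0" into var=HugePages_Rsvd, val=0,
    # and "MemTotal: 12241972 kB" into var=MemTotal, val=12241972, _=kB,
    # matching the read -r var val _ lines in the log.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

On the machine in this log, `get_meminfo HugePages_Rsvd` returns 0, which is what the surrounding `setup/hugepages.sh@100 -- # resv=0` assignment captures.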
00:03:30.656 nr_hugepages=1024 00:03:30.656 resv_hugepages=0 00:03:30.656 surplus_hugepages=0 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.656 anon_hugepages=0 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7957752 kB' 'MemAvailable: 9513476 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451160 kB' 'Inactive: 1441784 kB' 'Active(anon): 131076 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 122220 kB' 'Mapped: 48660 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132764 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71176 kB' 'KernelStack: 6320 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.656 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.656 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.657 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 
01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.658 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:30.658 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7957928 kB' 'MemUsed: 4284044 kB' 'SwapCached: 0 kB' 'Active: 451148 kB' 'Inactive: 1441784 kB' 'Active(anon): 131064 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 1772332 kB' 'Mapped: 48660 kB' 'AnonPages: 122168 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132760 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 
01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.659 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.660 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.946 node0=1024 expecting 1024 00:03:30.946 ************************************ 00:03:30.946 END TEST even_2G_alloc 00:03:30.946 ************************************ 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:30.946 00:03:30.946 real 0m0.609s 00:03:30.946 user 0m0.284s 00:03:30.946 sys 0m0.323s 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:30.946 01:58:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:30.946 01:58:39 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:30.946 01:58:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:30.946 01:58:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:30.946 01:58:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:30.946 01:58:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:30.946 
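The long run of `continue` entries above is the trace of `setup/common.sh`'s `get_meminfo` helper scanning meminfo one line at a time with `IFS=': '` until the requested key (here `HugePages_Total`, then `HugePages_Surp`) matches. A minimal standalone sketch of that parsing pattern (not the SPDK helper itself; the function name and the optional file parameter are illustrative assumptions) looks like:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo-scanning pattern seen in the trace: split each
# line on ':' or space, skip non-matching keys (the stream of "continue"
# entries above), and print the value once the requested key is found.
get_meminfo() {
  local get=$1 file=${2:-/proc/meminfo}   # file arg is an assumption for testability
  local var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue      # mirrors common.sh@32's continue loop
    echo "$val"                           # mirrors common.sh@33's "echo 1024"
    return 0
  done < "$file"
  return 1                                # key not present
}

get_meminfo HugePages_Total               # on the test VM above this printed 1024
```

Per-node values (the `node=0` case in the trace) come from `/sys/devices/system/node/node0/meminfo` instead of `/proc/meminfo`; those lines carry a `Node 0 ` prefix, which the real helper strips before the same read loop runs.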
************************************ 00:03:30.946 START TEST odd_alloc 00:03:30.947 ************************************ 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@84 -- # : 0 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.947 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:31.208 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:31.208 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:31.208 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@18 -- # local node= 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951228 kB' 'MemAvailable: 9506944 kB' 'Buffers: 2436 kB' 'Cached: 1769888 kB' 'SwapCached: 0 kB' 'Active: 451736 kB' 'Inactive: 1441776 kB' 'Active(anon): 131652 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441776 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 48992 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132792 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6348 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.208 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 
01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.209 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.209 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951228 kB' 'MemAvailable: 9506948 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451720 kB' 'Inactive: 1441780 kB' 'Active(anon): 131636 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 
1441780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122936 kB' 'Mapped: 48928 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132808 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71220 kB' 'KernelStack: 6416 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.210 01:58:39 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.210 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [repetitive scan elided: `IFS=': ' read -r var val _` steps through the remaining /proc/meminfo fields (Dirty, Writeback, AnonPages, ... HugePages_Free, HugePages_Rsvd) and hits `continue` on every key that is not HugePages_Surp] 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # 
local get=HugePages_Rsvd 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.211 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7951912 kB' 'MemAvailable: 9507632 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451516 kB' 'Inactive: 1441780 kB' 'Active(anon): 131432 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122720 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132816 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71228 kB' 'KernelStack: 6392 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.212 01:58:39 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31-32 -- # [repetitive scan elided: the read loop steps through the remaining /proc/meminfo fields (Cached, SwapCached, ... HugePages_Total, HugePages_Free) and hits `continue` on every key that is not HugePages_Rsvd] 
00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.475 nr_hugepages=1025 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:31.475 resv_hugepages=0 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.475 surplus_hugepages=0 00:03:31.475 anon_hugepages=0 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- 
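The trace above repeatedly exercises one helper: a shell loop that splits each /proc/meminfo line on `': '` and echoes the value of a single requested field (here HugePages_Surp, HugePages_Rsvd, then HugePages_Total). A minimal standalone sketch of that pattern, reconstructed from the `setup/common.sh@17-33` trace lines rather than taken from the actual SPDK source (`get_meminfo_sketch` is a hypothetical name):

```shell
#!/usr/bin/env bash
# Sketch of the /proc/meminfo lookup the xtrace log walks through:
# split each line on ':' or space, skip non-matching keys, echo the match.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # e.g. "HugePages_Total:    1025" -> var=HugePages_Total val=1025
        # e.g. "MemTotal: 12241972 kB"    -> val=12241972, "kB" lands in _
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # field not present
}

get_meminfo_sketch MemTotal   # prints total memory in kB
```

Because the trailing unit is discarded into `_`, callers get a bare integer, which is what lets `hugepages.sh` do arithmetic like `(( 1025 == nr_hugepages + surp + resv ))` directly on the results.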
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7952248 kB' 'MemAvailable: 9507968 kB' 'Buffers: 2436 kB' 'Cached: 1769892 kB' 'SwapCached: 0 kB' 'Active: 451580 kB' 'Inactive: 1441780 kB' 'Active(anon): 131496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 122780 kB' 'Mapped: 48800 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132816 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71228 kB' 'KernelStack: 6376 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.475 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.476 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:31.477 01:58:40 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7952248 kB' 'MemUsed: 4289724 kB' 'SwapCached: 0 kB' 'Active: 451476 kB' 'Inactive: 1441780 kB' 'Active(anon): 131392 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441780 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 
kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 1772328 kB' 'Mapped: 48800 kB' 'AnonPages: 122604 kB' 'Shmem: 10464 kB' 'KernelStack: 6344 kB' 'PageTables: 4328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132812 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71224 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.477 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.478 
01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.478 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:31.479 node0=1025 expecting 1025 00:03:31.479 ************************************ 00:03:31.479 END TEST odd_alloc 00:03:31.479 
************************************ 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:31.479 00:03:31.479 real 0m0.616s 00:03:31.479 user 0m0.270s 00:03:31.479 sys 0m0.344s 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:31.479 01:58:40 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:31.479 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:31.479 01:58:40 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:31.479 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:31.479 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:31.479 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:31.479 ************************************ 00:03:31.479 START TEST custom_alloc 00:03:31.479 ************************************ 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:31.479 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.479 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:31.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI 
dev 00:03:32.001 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:32.001 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 
00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9002832 kB' 'MemAvailable: 10558556 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451564 kB' 'Inactive: 1441784 kB' 'Active(anon): 131480 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 122844 kB' 'Mapped: 48904 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132756 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71168 kB' 'KernelStack: 6376 kB' 'PageTables: 4172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.001 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.002 
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.002
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.003
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9002832 kB' 'MemAvailable: 10558556 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451184 kB' 'Inactive: 1441784 kB' 'Active(anon): 131100 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122472 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132792 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71204 kB' 'KernelStack: 6320 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.003
[xtrace then repeats the pair "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "setup/common.sh@32 -- # continue" (with IFS=': ' and read -r var val _ between iterations) for every /proc/meminfo field above, until the match below]
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.004
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.005
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9002832 kB' 'MemAvailable: 10558556 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451200 kB' 'Inactive: 1441784 kB' 'Active(anon): 131116 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122536 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132796 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71208 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.005
[xtrace again iterates "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / continue over each field; this log chunk ends mid-iteration at the SUnreclaim check, 00:03:32.006]
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.006 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:32.006 nr_hugepages=512 00:03:32.006 01:58:40 
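The trace above is one full pass of the script's `get_meminfo` helper: it reads `/proc/meminfo` (or a per-NUMA-node `meminfo` file), splits each line with `IFS=': ' read -r var val _`, `continue`s past every key that is not the one requested, and finally `echo`s the matched value (here `HugePages_Rsvd` → `0`). A minimal standalone sketch of that parsing loop, with illustrative names (this is not the literal `setup/common.sh` source; the fallback-to-0 behavior and the `Node N ` prefix handling mirror what the trace shows):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix pattern below

# meminfo_value KEY  (meminfo-format text on stdin)
# Prints the numeric value for KEY, mimicking the IFS=': ' read loop
# visible in the xtrace; prints 0 when the key is absent.
meminfo_value() {
    local get=$1 line var val _
    while IFS= read -r line; do
        # Per-node files (/sys/devices/system/node/nodeN/meminfo) prefix
        # each line with "Node N "; strip that so keys compare cleanly.
        line=${line#Node +([0-9]) }
        IFS=': ' read -r var val _ <<<"$line"
        if [[ $var == "$get" ]]; then
            printf '%s\n' "$val"
            return 0
        fi
        # non-matching keys fall through, like the trace's `continue`
    done
    printf '0\n'
}
```

Example use: `grep . /proc/meminfo | meminfo_value HugePages_Total` would print the system-wide hugepage count that the trace later compares against `nr_hugepages + surp + resv`.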
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.006 resv_hugepages=0 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:32.007 surplus_hugepages=0 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.007 anon_hugepages=0 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9002832 kB' 
'MemAvailable: 10558556 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 451244 kB' 'Inactive: 1441784 kB' 'Active(anon): 131160 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 122536 kB' 'Mapped: 48664 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132796 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71208 kB' 'KernelStack: 6336 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 351280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.007 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.007 
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 
01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:32.008 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.008 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 9002832 kB' 'MemUsed: 3239140 kB' 'SwapCached: 0 kB' 'Active: 451296 kB' 'Inactive: 1441784 kB' 'Active(anon): 131212 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 1772332 
kB' 'Mapped: 48664 kB' 'AnonPages: 122372 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132784 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.009 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.268 node0=512 expecting 512 00:03:32.268 ************************************ 
00:03:32.268 END TEST custom_alloc 00:03:32.268 ************************************ 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:32.268 00:03:32.268 real 0m0.632s 00:03:32.268 user 0m0.288s 00:03:32.268 sys 0m0.327s 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.268 01:58:40 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.268 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.268 01:58:40 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:32.268 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.268 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.268 01:58:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.268 ************************************ 00:03:32.268 START TEST no_shrink_alloc 00:03:32.268 ************************************ 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:32.268 01:58:40 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.268 01:58:40 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:32.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:32.528 0000:00:11.0 (1b36 0010): Already 
using the uio_pci_generic driver 00:03:32.529 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7955520 kB' 'MemAvailable: 9511244 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 447776 kB' 'Inactive: 1441784 kB' 'Active(anon): 127692 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118756 kB' 'Mapped: 48056 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132732 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71144 kB' 'KernelStack: 6264 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54612 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.529 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956572 kB' 'MemAvailable: 9512296 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 447300 kB' 'Inactive: 1441784 kB' 'Active(anon): 127216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118332 kB' 'Mapped: 48004 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132716 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71128 kB' 'KernelStack: 6240 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 335720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.530 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.530 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.531 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.794 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:32.795 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- 
# mem_f=/proc/meminfo 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956640 kB' 'MemAvailable: 9512364 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 447428 kB' 'Inactive: 1441784 kB' 'Active(anon): 127344 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118504 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132716 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71128 kB' 'KernelStack: 6208 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54548 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 
9437184 kB' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.796 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:32.797 nr_hugepages=1024 00:03:32.797 resv_hugepages=0 00:03:32.797 surplus_hugepages=0 00:03:32.797 anon_hugepages=0 00:03:32.797 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956640 kB' 'MemAvailable: 9512368 kB' 'Buffers: 2436 kB' 'Cached: 1769900 kB' 'SwapCached: 0 kB' 'Active: 447124 kB' 'Inactive: 1441788 kB' 'Active(anon): 127040 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441788 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 118188 kB' 'Mapped: 47924 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132688 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71100 kB' 'KernelStack: 6224 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.798 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.798 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical `[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]` / `continue` / `IFS=': '` / `read -r var val _` trace repeated for each remaining meminfo field, Active(anon) through Unaccepted; none matches HugePages_Total ...]
00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.800 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7956640 kB' 'MemUsed: 4285332 kB' 'SwapCached: 0 kB' 'Active: 447136 kB' 'Inactive: 1441788 kB' 'Active(anon): 127052 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'FilePages: 1772336 kB' 'Mapped: 47924 kB' 'AnonPages: 118200 kB' 'Shmem: 10464 kB' 'KernelStack: 6224 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132676 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71088 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical `[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]` / `continue` / `IFS=': '` / `read -r var val _` trace repeated for each field of the node0 dump above, MemTotal through HugePages_Free; only HugePages_Surp matches ...]
00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:32.801 node0=1024 expecting 1024 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.801 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:33.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:33.060 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:33.060 
0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:33.060 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:33.060 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7955216 kB' 'MemAvailable: 9510944 kB' 'Buffers: 2436 kB' 'Cached: 1769900 kB' 'SwapCached: 0 kB' 'Active: 447452 kB' 'Inactive: 1441788 kB' 'Active(anon): 127368 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118796 kB' 'Mapped: 48108 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132648 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71060 kB' 'KernelStack: 6308 kB' 'PageTables: 3760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54596 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.324 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.325 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7955216 kB' 'MemAvailable: 9510944 kB' 'Buffers: 2436 kB' 'Cached: 1769900 kB' 'SwapCached: 0 kB' 'Active: 447300 kB' 'Inactive: 1441788 kB' 'Active(anon): 127216 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118376 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132648 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71060 kB' 'KernelStack: 6268 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.326 
01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.326 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 --
# mem_f=/proc/meminfo 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7954968 kB' 'MemAvailable: 9510696 kB' 'Buffers: 2436 kB' 'Cached: 1769900 kB' 'SwapCached: 0 kB' 'Active: 447392 kB' 'Inactive: 1441788 kB' 'Active(anon): 127308 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118476 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132648 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71060 kB' 'KernelStack: 6284 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54580 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.328 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.329 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.330 nr_hugepages=1024 00:03:33.330 resv_hugepages=0 00:03:33.330 surplus_hugepages=0 00:03:33.330 anon_hugepages=0 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 
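The trace above repeats one pattern hundreds of times: `get_meminfo` in `setup/common.sh` prints the captured meminfo contents, then reads them back line by line with `IFS=': '`, `continue`-ing past every key until it matches the requested field (the `\H\u\g\e\P\a\g\e\s\_\R\s\v\d` strings are just xtrace's quoting of the pattern), and finally echoes that field's value. A minimal sketch of that parsing pattern — not the actual SPDK script, and using a hypothetical inline sample in place of `/proc/meminfo` — looks like this:

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo parsing loop seen in the trace: split each
# "Key: value" line on ': ', skip non-matching keys, print the match.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # mismatch -> next line (the repeated "continue" in the trace)
        echo "$val"                       # matched -> emit the value, as "echo 1024" / "echo 0" do above
        return 0
    done
    return 1                              # key not present
}

# Hypothetical stand-in for /proc/meminfo contents.
sample='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0'

get_meminfo HugePages_Total <<< "$sample"
get_meminfo HugePages_Rsvd  <<< "$sample"
```

Each log entry in the trace corresponds to one iteration of this loop, which is why a single `get_meminfo HugePages_Rsvd` call produces a `[[ Key == ... ]] / continue` pair for every field in meminfo before reaching `echo 0`.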
00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7955280 kB' 'MemAvailable: 9511004 kB' 'Buffers: 2436 kB' 'Cached: 1769896 kB' 'SwapCached: 0 kB' 'Active: 447496 kB' 'Inactive: 1441784 kB' 'Active(anon): 127412 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441784 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 118528 kB' 'Mapped: 47984 kB' 'Shmem: 10464 kB' 'KReclaimable: 61588 kB' 'Slab: 132644 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71056 kB' 'KernelStack: 6236 kB' 'PageTables: 3700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 336088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54564 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 157548 kB' 'DirectMap2M: 5085184 kB' 'DirectMap1G: 9437184 kB' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.330 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.331 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@27 -- # local node 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7955280 kB' 'MemUsed: 4286692 kB' 'SwapCached: 0 kB' 'Active: 447388 kB' 'Inactive: 1441788 kB' 'Active(anon): 127304 kB' 'Inactive(anon): 0 kB' 'Active(file): 320084 kB' 'Inactive(file): 1441788 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'FilePages: 1772336 kB' 'Mapped: 47984 kB' 'AnonPages: 118412 kB' 'Shmem: 10464 kB' 'KernelStack: 6284 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 61588 kB' 'Slab: 132644 kB' 'SReclaimable: 61588 kB' 'SUnreclaim: 71056 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.332 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:33.333 01:58:42 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:33.333 node0=1024 expecting 1024 00:03:33.333 ************************************ 00:03:33.333 END TEST no_shrink_alloc 00:03:33.333 ************************************ 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:33.333 01:58:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:33.334 00:03:33.334 real 0m1.183s 00:03:33.334 user 0m0.536s 00:03:33.334 sys 0m0.673s 00:03:33.334 01:58:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.334 01:58:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:33.334 01:58:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@39 
-- # for node in "${!nodes_sys[@]}" 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:33.334 01:58:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:33.334 00:03:33.334 real 0m5.209s 00:03:33.334 user 0m2.305s 00:03:33.334 sys 0m2.782s 00:03:33.334 01:58:42 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.334 01:58:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:33.334 ************************************ 00:03:33.334 END TEST hugepages 00:03:33.334 ************************************ 00:03:33.592 01:58:42 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:33.592 01:58:42 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:33.592 01:58:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.592 01:58:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.592 01:58:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.592 ************************************ 00:03:33.592 START TEST driver 00:03:33.592 ************************************ 00:03:33.592 01:58:42 setup.sh.driver -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:33.592 * Looking for test storage... 
00:03:33.592 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:33.592 01:58:42 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:33.592 01:58:42 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:33.592 01:58:42 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:34.158 01:58:42 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:34.158 01:58:42 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.158 01:58:42 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.158 01:58:42 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:34.158 ************************************ 00:03:34.158 START TEST guess_driver 00:03:34.158 ************************************ 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:34.158 01:58:42 
setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:34.158 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:03:34.158 Looking for driver=uio_pci_generic 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.158 01:58:42 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:35.094 01:58:43 
setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.094 01:58:43 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:35.662 00:03:35.662 real 0m1.540s 00:03:35.662 user 0m0.567s 00:03:35.662 sys 0m0.967s 00:03:35.662 01:58:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.662 01:58:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:35.662 ************************************ 00:03:35.662 END TEST guess_driver 00:03:35.662 ************************************ 00:03:35.662 01:58:44 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:35.662 00:03:35.662 real 0m2.274s 00:03:35.662 user 0m0.807s 00:03:35.662 sys 0m1.510s 00:03:35.662 01:58:44 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.662 01:58:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:35.662 ************************************ 
00:03:35.662 END TEST driver 00:03:35.662 ************************************ 00:03:35.920 01:58:44 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:35.921 01:58:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:35.921 01:58:44 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.921 01:58:44 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.921 01:58:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:35.921 ************************************ 00:03:35.921 START TEST devices 00:03:35.921 ************************************ 00:03:35.921 01:58:44 setup.sh.devices -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:35.921 * Looking for test storage... 00:03:35.921 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:35.921 01:58:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:35.921 01:58:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:35.921 01:58:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:35.921 01:58:44 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:36.858 01:58:45 setup.sh.devices 
-- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:36.858 01:58:45 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 
00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:36.858 No valid GPT data, bailing 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in 
"/sys/block/nvme"!(*c*) 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:36.858 No valid GPT data, bailing 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 
00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:36.858 No valid GPT data, bailing 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:36.858 01:58:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:36.858 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:36.858 01:58:45 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:36.859 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:36.859 01:58:45 setup.sh.devices -- scripts/common.sh@378 -- # local 
block=nvme1n1 pt 00:03:36.859 01:58:45 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:36.859 No valid GPT data, bailing 00:03:37.118 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:37.118 01:58:45 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:37.118 01:58:45 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:37.118 01:58:45 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:37.118 01:58:45 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:37.118 01:58:45 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:37.118 01:58:45 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:37.118 01:58:45 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:37.118 01:58:45 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:37.118 01:58:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:37.118 ************************************ 00:03:37.118 START TEST nvme_mount 00:03:37.118 ************************************ 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:37.118 01:58:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:37.118 01:58:45 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:38.055 Creating new GPT entries in memory. 00:03:38.055 GPT data structures destroyed! 
You may now partition the disk using fdisk or 00:03:38.055 other utilities. 00:03:38.055 01:58:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:38.055 01:58:46 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.055 01:58:46 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:38.055 01:58:46 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:38.055 01:58:46 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:38.991 Creating new GPT entries in memory. 00:03:38.991 The operation has completed successfully. 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 56952 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:38.991 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.251 01:58:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:39.251 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:39.251 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:39.251 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # 
found=1 00:03:39.251 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.251 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:39.251 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.510 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:39.510 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.510 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:39.510 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@25 
-- # wipefs --all /dev/nvme0n1p1 00:03:39.769 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.769 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.028 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:40.028 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:40.028 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:40.028 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- 
setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.028 01:58:48 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.287 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:40.287 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:40.287 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:40.287 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.287 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:40.287 01:58:48 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount 
-- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:40.546 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.547 01:58:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:40.805 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:40.806 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:40.806 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:40.806 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.806 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:40.806 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.064 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:41.064 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.064 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:41.064 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 
00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:41.323 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:41.323 00:03:41.323 real 0m4.214s 00:03:41.323 user 0m0.747s 00:03:41.323 sys 0m1.180s 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:41.323 01:58:49 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:41.323 ************************************ 00:03:41.323 END TEST nvme_mount 00:03:41.323 ************************************ 00:03:41.323 01:58:49 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:41.323 01:58:49 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:41.323 01:58:49 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:41.323 01:58:49 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.323 01:58:49 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:41.323 ************************************ 00:03:41.323 START TEST dm_mount 00:03:41.323 ************************************ 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 
00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:41.323 01:58:49 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:42.260 Creating new GPT entries in memory. 00:03:42.260 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:42.260 other utilities. 
00:03:42.260 01:58:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:42.260 01:58:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:42.260 01:58:50 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:42.260 01:58:50 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:42.260 01:58:50 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:03:43.638 Creating new GPT entries in memory. 00:03:43.638 The operation has completed successfully. 00:03:43.638 01:58:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:43.638 01:58:52 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:43.638 01:58:52 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:43.638 01:58:52 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:43.638 01:58:52 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:03:44.575 The operation has completed successfully. 
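The two `flock ... sgdisk --new` calls traced above carve out partitions at sectors 2048:264191 and 264192:526335. Those boundaries come from the arithmetic visible in the `setup/common.sh` trace: the 1 GiB byte size is divided by 4096 (i.e. converted to 4096-byte logical sectors), and each partition is laid out back-to-back starting at sector 2048. A minimal standalone sketch of that arithmetic, reusing the variable names from the trace (this is an illustration of the traced logic, not the actual SPDK script):

```shell
#!/usr/bin/env bash
# Reproduce the partition-boundary arithmetic from the setup/common.sh trace.
size=1073741824   # 1 GiB per partition, in bytes
part_no=2         # dm_mount partitions the drive twice

(( size /= 4096 ))  # bytes -> sectors, assuming 4096-byte logical sectors

part_start=0
part_end=0
args=()
for (( part = 1; part <= part_no; part++ )); do
  # First partition starts at sector 2048; each later one follows the previous.
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  args+=("--new=$part:$part_start:$part_end")
done

echo "${args[@]}"
```

Running this prints `--new=1:2048:264191 --new=2:264192:526335`, matching the two `sgdisk` invocations logged above.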
00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 57388 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test 
mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:44.575 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.576 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:44.576 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 
00:03:44.576 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.576 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:44.834 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:03:45.093 01:58:53 
setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.093 01:58:53 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:45.352 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.352 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:45.352 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:45.352 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.352 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.352 01:58:53 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.352 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.352 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:45.611 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- 
setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:45.611 00:03:45.611 real 0m4.358s 00:03:45.611 user 0m0.507s 00:03:45.611 sys 0m0.796s 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.611 01:58:54 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:45.611 ************************************ 00:03:45.611 END TEST dm_mount 00:03:45.611 ************************************ 00:03:45.611 01:58:54 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.611 01:58:54 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:45.870 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:03:45.870 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:03:45.870 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:45.870 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:45.870 01:58:54 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:45.870 01:58:54 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:03:45.870 01:58:54 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:45.870 01:58:54 
setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:45.870 01:58:54 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:45.870 01:58:54 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:45.870 01:58:54 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:45.870 ************************************ 00:03:45.870 END TEST devices 00:03:45.870 ************************************ 00:03:45.870 00:03:45.870 real 0m10.187s 00:03:45.870 user 0m1.945s 00:03:45.870 sys 0m2.601s 00:03:45.870 01:58:54 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.870 01:58:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:46.128 01:58:54 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:46.128 00:03:46.128 real 0m23.116s 00:03:46.128 user 0m7.381s 00:03:46.128 sys 0m9.915s 00:03:46.128 ************************************ 00:03:46.128 END TEST setup.sh 00:03:46.128 ************************************ 00:03:46.128 01:58:54 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.128 01:58:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.128 01:58:54 -- common/autotest_common.sh@1142 -- # return 0 00:03:46.128 01:58:54 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:46.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:46.742 Hugepages 00:03:46.742 node hugesize free / total 00:03:46.742 node0 1048576kB 0 / 0 00:03:46.742 node0 2048kB 2048 / 2048 00:03:46.742 00:03:46.742 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.002 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:47.002 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:03:47.002 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:03:47.002 01:58:55 -- spdk/autotest.sh@130 -- # uname 
-s 00:03:47.003 01:58:55 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:47.003 01:58:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:47.003 01:58:55 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:47.939 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:47.939 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.939 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:47.939 01:58:56 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:48.875 01:58:57 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:48.875 01:58:57 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:48.875 01:58:57 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:48.875 01:58:57 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:48.875 01:58:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:48.875 01:58:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:48.875 01:58:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.875 01:58:57 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:48.875 01:58:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:49.134 01:58:57 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:49.134 01:58:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:49.134 01:58:57 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:49.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:49.393 Waiting for block devices as requested 00:03:49.393 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.652 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:03:49.652 01:58:58 -- 
common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:49.652 01:58:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:03:49.652 01:58:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:49.652 01:58:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:49.652 01:58:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:49.652 01:58:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1557 -- # continue 00:03:49.652 01:58:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:49.652 01:58:58 -- 
common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:03:49.652 01:58:58 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:03:49.652 01:58:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:49.652 01:58:58 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:03:49.652 01:58:58 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:49.652 01:58:58 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:49.652 01:58:58 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:49.652 01:58:58 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:49.652 01:58:58 -- common/autotest_common.sh@1557 -- # continue 00:03:49.652 01:58:58 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:49.652 01:58:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.652 01:58:58 -- common/autotest_common.sh@10 -- 
# set +x 00:03:49.652 01:58:58 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:49.652 01:58:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.652 01:58:58 -- common/autotest_common.sh@10 -- # set +x 00:03:49.652 01:58:58 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:50.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:50.588 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.588 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:50.588 01:58:59 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:50.588 01:58:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.588 01:58:59 -- common/autotest_common.sh@10 -- # set +x 00:03:50.846 01:58:59 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:50.846 01:58:59 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:50.846 01:58:59 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.846 01:58:59 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:50.846 01:58:59 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:50.846 01:58:59 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:50.846 01:58:59 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:50.846 01:58:59 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:50.846 01:58:59 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.846 01:58:59 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:50.846 01:58:59 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:50.846 01:58:59 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:03:50.846 01:58:59 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:50.846 01:58:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:50.846 
01:58:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:50.847 01:58:59 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:50.847 01:58:59 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.847 01:58:59 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:50.847 01:58:59 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:03:50.847 01:58:59 -- common/autotest_common.sh@1580 -- # device=0x0010 00:03:50.847 01:58:59 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:03:50.847 01:58:59 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:50.847 01:58:59 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:50.847 01:58:59 -- common/autotest_common.sh@1593 -- # return 0 00:03:50.847 01:58:59 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:50.847 01:58:59 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:50.847 01:58:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:50.847 01:58:59 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:50.847 01:58:59 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:50.847 01:58:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:50.847 01:58:59 -- common/autotest_common.sh@10 -- # set +x 00:03:50.847 01:58:59 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:50.847 01:58:59 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.847 01:58:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.847 01:58:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.847 01:58:59 -- common/autotest_common.sh@10 -- # set +x 00:03:50.847 ************************************ 00:03:50.847 START TEST env 00:03:50.847 ************************************ 00:03:50.847 01:58:59 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:50.847 * Looking for test storage... 
00:03:50.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:50.847 01:58:59 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.847 01:58:59 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:50.847 01:58:59 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:50.847 01:58:59 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.847 ************************************ 00:03:50.847 START TEST env_memory 00:03:50.847 ************************************ 00:03:50.847 01:58:59 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:50.847 00:03:50.847 00:03:50.847 CUnit - A unit testing framework for C - Version 2.1-3 00:03:50.847 http://cunit.sourceforge.net/ 00:03:50.847 00:03:50.847 00:03:50.847 Suite: memory 00:03:51.106 Test: alloc and free memory map ...[2024-07-23 01:58:59.651962] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:51.106 passed 00:03:51.106 Test: mem map translation ...[2024-07-23 01:58:59.713875] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:51.106 [2024-07-23 01:58:59.714585] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:51.106 [2024-07-23 01:58:59.715198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:51.106 [2024-07-23 01:58:59.715653] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:51.106 passed 00:03:51.106 Test: mem map registration ...[2024-07-23 01:58:59.814776] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:51.106 [2024-07-23 01:58:59.815480] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:51.106 passed 00:03:51.365 Test: mem map adjacent registrations ...passed 00:03:51.365 00:03:51.365 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.365 suites 1 1 n/a 0 0 00:03:51.365 tests 4 4 4 0 0 00:03:51.365 asserts 152 152 152 0 n/a 00:03:51.365 00:03:51.365 Elapsed time = 0.342 seconds 00:03:51.365 00:03:51.365 real 0m0.388s 00:03:51.365 user 0m0.348s 00:03:51.365 sys 0m0.027s 00:03:51.365 01:58:59 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.365 01:58:59 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:51.365 ************************************ 00:03:51.365 END TEST env_memory 00:03:51.365 ************************************ 00:03:51.365 01:59:00 env -- common/autotest_common.sh@1142 -- # return 0 00:03:51.365 01:59:00 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:51.365 01:59:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.365 01:59:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.365 01:59:00 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.365 ************************************ 00:03:51.365 START TEST env_vtophys 00:03:51.365 ************************************ 00:03:51.365 01:59:00 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:51.365 EAL: lib.eal log level changed from notice to debug 00:03:51.365 EAL: Detected lcore 0 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 1 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 2 as core 0 on socket 0 00:03:51.365 EAL: 
Detected lcore 3 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 4 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 5 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 6 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 7 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 8 as core 0 on socket 0 00:03:51.365 EAL: Detected lcore 9 as core 0 on socket 0 00:03:51.365 EAL: Maximum logical cores by configuration: 128 00:03:51.365 EAL: Detected CPU lcores: 10 00:03:51.365 EAL: Detected NUMA nodes: 1 00:03:51.365 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:51.365 EAL: Detected shared linkage of DPDK 00:03:51.365 EAL: No shared files mode enabled, IPC will be disabled 00:03:51.365 EAL: Selected IOVA mode 'PA' 00:03:51.365 EAL: Probing VFIO support... 00:03:51.365 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:51.365 EAL: VFIO modules not loaded, skipping VFIO support... 00:03:51.365 EAL: Ask a virtual area of 0x2e000 bytes 00:03:51.365 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:51.365 EAL: Setting up physically contiguous memory... 
00:03:51.365 EAL: Setting maximum number of open files to 524288 00:03:51.365 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:51.365 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:51.365 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.365 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:51.365 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.365 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.365 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:51.365 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:51.365 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.365 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:51.365 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.365 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.365 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:51.365 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:51.365 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.365 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:51.366 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.366 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.366 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:51.366 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:51.366 EAL: Ask a virtual area of 0x61000 bytes 00:03:51.366 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:51.366 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:51.366 EAL: Ask a virtual area of 0x400000000 bytes 00:03:51.366 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:51.366 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:51.366 EAL: Hugepages will be freed exactly as allocated. 
00:03:51.366 EAL: No shared files mode enabled, IPC is disabled 00:03:51.366 EAL: No shared files mode enabled, IPC is disabled 00:03:51.624 EAL: TSC frequency is ~2200000 KHz 00:03:51.624 EAL: Main lcore 0 is ready (tid=7fb0c06d9a40;cpuset=[0]) 00:03:51.624 EAL: Trying to obtain current memory policy. 00:03:51.624 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:51.624 EAL: Restoring previous memory policy: 0 00:03:51.624 EAL: request: mp_malloc_sync 00:03:51.624 EAL: No shared files mode enabled, IPC is disabled 00:03:51.624 EAL: Heap on socket 0 was expanded by 2MB 00:03:51.624 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:03:51.624 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:51.624 EAL: Mem event callback 'spdk:(nil)' registered 00:03:51.625 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:03:51.625 00:03:51.625 00:03:51.625 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.625 http://cunit.sourceforge.net/ 00:03:51.625 00:03:51.625 00:03:51.625 Suite: components_suite 00:03:52.192 Test: vtophys_malloc_test ...passed 00:03:52.192 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:52.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.192 EAL: Restoring previous memory policy: 4 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was expanded by 4MB 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was shrunk by 4MB 00:03:52.192 EAL: Trying to obtain current memory policy. 
00:03:52.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.192 EAL: Restoring previous memory policy: 4 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was expanded by 6MB 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was shrunk by 6MB 00:03:52.192 EAL: Trying to obtain current memory policy. 00:03:52.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.192 EAL: Restoring previous memory policy: 4 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was expanded by 10MB 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was shrunk by 10MB 00:03:52.192 EAL: Trying to obtain current memory policy. 00:03:52.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.192 EAL: Restoring previous memory policy: 4 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was expanded by 18MB 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was shrunk by 18MB 00:03:52.192 EAL: Trying to obtain current memory policy. 
00:03:52.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.192 EAL: Restoring previous memory policy: 4 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was expanded by 34MB 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was shrunk by 34MB 00:03:52.192 EAL: Trying to obtain current memory policy. 00:03:52.192 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.192 EAL: Restoring previous memory policy: 4 00:03:52.192 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.192 EAL: request: mp_malloc_sync 00:03:52.192 EAL: No shared files mode enabled, IPC is disabled 00:03:52.192 EAL: Heap on socket 0 was expanded by 66MB 00:03:52.451 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.451 EAL: request: mp_malloc_sync 00:03:52.451 EAL: No shared files mode enabled, IPC is disabled 00:03:52.451 EAL: Heap on socket 0 was shrunk by 66MB 00:03:52.451 EAL: Trying to obtain current memory policy. 00:03:52.451 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.451 EAL: Restoring previous memory policy: 4 00:03:52.451 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.451 EAL: request: mp_malloc_sync 00:03:52.451 EAL: No shared files mode enabled, IPC is disabled 00:03:52.451 EAL: Heap on socket 0 was expanded by 130MB 00:03:52.710 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.710 EAL: request: mp_malloc_sync 00:03:52.710 EAL: No shared files mode enabled, IPC is disabled 00:03:52.710 EAL: Heap on socket 0 was shrunk by 130MB 00:03:52.710 EAL: Trying to obtain current memory policy. 
00:03:52.710 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:52.969 EAL: Restoring previous memory policy: 4 00:03:52.969 EAL: Calling mem event callback 'spdk:(nil)' 00:03:52.969 EAL: request: mp_malloc_sync 00:03:52.969 EAL: No shared files mode enabled, IPC is disabled 00:03:52.969 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.228 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.228 EAL: request: mp_malloc_sync 00:03:53.228 EAL: No shared files mode enabled, IPC is disabled 00:03:53.228 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.486 EAL: Trying to obtain current memory policy. 00:03:53.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.745 EAL: Restoring previous memory policy: 4 00:03:53.745 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.745 EAL: request: mp_malloc_sync 00:03:53.745 EAL: No shared files mode enabled, IPC is disabled 00:03:53.745 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.680 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.680 EAL: request: mp_malloc_sync 00:03:54.680 EAL: No shared files mode enabled, IPC is disabled 00:03:54.680 EAL: Heap on socket 0 was shrunk by 514MB 00:03:55.246 EAL: Trying to obtain current memory policy. 
00:03:55.246 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:55.505 EAL: Restoring previous memory policy: 4 00:03:55.505 EAL: Calling mem event callback 'spdk:(nil)' 00:03:55.505 EAL: request: mp_malloc_sync 00:03:55.505 EAL: No shared files mode enabled, IPC is disabled 00:03:55.505 EAL: Heap on socket 0 was expanded by 1026MB 00:03:56.880 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.138 EAL: request: mp_malloc_sync 00:03:57.138 EAL: No shared files mode enabled, IPC is disabled 00:03:57.138 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:58.513 passed 00:03:58.513 00:03:58.513 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.513 suites 1 1 n/a 0 0 00:03:58.513 tests 2 2 2 0 0 00:03:58.513 asserts 5306 5306 5306 0 n/a 00:03:58.513 00:03:58.513 Elapsed time = 6.641 seconds 00:03:58.513 EAL: Calling mem event callback 'spdk:(nil)' 00:03:58.513 EAL: request: mp_malloc_sync 00:03:58.513 EAL: No shared files mode enabled, IPC is disabled 00:03:58.513 EAL: Heap on socket 0 was shrunk by 2MB 00:03:58.513 EAL: No shared files mode enabled, IPC is disabled 00:03:58.513 EAL: No shared files mode enabled, IPC is disabled 00:03:58.513 EAL: No shared files mode enabled, IPC is disabled 00:03:58.513 00:03:58.513 real 0m6.962s 00:03:58.513 user 0m5.682s 00:03:58.513 sys 0m1.119s 00:03:58.513 01:59:06 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.513 ************************************ 00:03:58.513 END TEST env_vtophys 00:03:58.513 ************************************ 00:03:58.513 01:59:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:58.513 01:59:07 env -- common/autotest_common.sh@1142 -- # return 0 00:03:58.513 01:59:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:58.513 01:59:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.513 01:59:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.513 01:59:07 env -- 
common/autotest_common.sh@10 -- # set +x 00:03:58.513 ************************************ 00:03:58.513 START TEST env_pci 00:03:58.513 ************************************ 00:03:58.513 01:59:07 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:58.513 00:03:58.513 00:03:58.513 CUnit - A unit testing framework for C - Version 2.1-3 00:03:58.513 http://cunit.sourceforge.net/ 00:03:58.513 00:03:58.513 00:03:58.513 Suite: pci 00:03:58.513 Test: pci_hook ...[2024-07-23 01:59:07.079743] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58650 has claimed it 00:03:58.513 passed 00:03:58.513 00:03:58.513 EAL: Cannot find device (10000:00:01.0) 00:03:58.513 EAL: Failed to attach device on primary process 00:03:58.513 Run Summary: Type Total Ran Passed Failed Inactive 00:03:58.513 suites 1 1 n/a 0 0 00:03:58.513 tests 1 1 1 0 0 00:03:58.513 asserts 25 25 25 0 n/a 00:03:58.513 00:03:58.513 Elapsed time = 0.009 seconds 00:03:58.513 00:03:58.513 real 0m0.089s 00:03:58.513 user 0m0.034s 00:03:58.513 sys 0m0.053s 00:03:58.513 ************************************ 00:03:58.513 END TEST env_pci 00:03:58.513 ************************************ 00:03:58.513 01:59:07 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.513 01:59:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:58.513 01:59:07 env -- common/autotest_common.sh@1142 -- # return 0 00:03:58.513 01:59:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:58.513 01:59:07 env -- env/env.sh@15 -- # uname 00:03:58.513 01:59:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:58.513 01:59:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:58.513 01:59:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 
--base-virtaddr=0x200000000000 00:03:58.513 01:59:07 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:58.513 01:59:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.513 01:59:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:58.513 ************************************ 00:03:58.513 START TEST env_dpdk_post_init 00:03:58.513 ************************************ 00:03:58.513 01:59:07 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:58.513 EAL: Detected CPU lcores: 10 00:03:58.513 EAL: Detected NUMA nodes: 1 00:03:58.513 EAL: Detected shared linkage of DPDK 00:03:58.772 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:58.772 EAL: Selected IOVA mode 'PA' 00:03:58.772 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:58.772 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:58.772 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:03:58.772 Starting DPDK initialization... 00:03:58.772 Starting SPDK post initialization... 00:03:58.772 SPDK NVMe probe 00:03:58.772 Attaching to 0000:00:10.0 00:03:58.772 Attaching to 0000:00:11.0 00:03:58.772 Attached to 0000:00:10.0 00:03:58.772 Attached to 0000:00:11.0 00:03:58.772 Cleaning up... 
00:03:58.772 00:03:58.772 real 0m0.313s 00:03:58.772 user 0m0.110s 00:03:58.772 sys 0m0.098s 00:03:58.772 ************************************ 00:03:58.772 END TEST env_dpdk_post_init 00:03:58.772 ************************************ 00:03:58.772 01:59:07 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.772 01:59:07 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.031 01:59:07 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.031 01:59:07 env -- env/env.sh@26 -- # uname 00:03:59.031 01:59:07 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.031 01:59:07 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.031 01:59:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.031 01:59:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.031 01:59:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.031 ************************************ 00:03:59.031 START TEST env_mem_callbacks 00:03:59.031 ************************************ 00:03:59.031 01:59:07 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.031 EAL: Detected CPU lcores: 10 00:03:59.031 EAL: Detected NUMA nodes: 1 00:03:59.031 EAL: Detected shared linkage of DPDK 00:03:59.031 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.031 EAL: Selected IOVA mode 'PA' 00:03:59.031 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.031 00:03:59.031 00:03:59.031 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.031 http://cunit.sourceforge.net/ 00:03:59.031 00:03:59.031 00:03:59.031 Suite: memory 00:03:59.031 Test: test ... 
00:03:59.031 register 0x200000200000 2097152 00:03:59.031 malloc 3145728 00:03:59.031 register 0x200000400000 4194304 00:03:59.031 buf 0x2000004fffc0 len 3145728 PASSED 00:03:59.031 malloc 64 00:03:59.031 buf 0x2000004ffec0 len 64 PASSED 00:03:59.031 malloc 4194304 00:03:59.031 register 0x200000800000 6291456 00:03:59.031 buf 0x2000009fffc0 len 4194304 PASSED 00:03:59.031 free 0x2000004fffc0 3145728 00:03:59.031 free 0x2000004ffec0 64 00:03:59.031 unregister 0x200000400000 4194304 PASSED 00:03:59.031 free 0x2000009fffc0 4194304 00:03:59.031 unregister 0x200000800000 6291456 PASSED 00:03:59.290 malloc 8388608 00:03:59.290 register 0x200000400000 10485760 00:03:59.290 buf 0x2000005fffc0 len 8388608 PASSED 00:03:59.290 free 0x2000005fffc0 8388608 00:03:59.290 unregister 0x200000400000 10485760 PASSED 00:03:59.290 passed 00:03:59.290 00:03:59.290 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.290 suites 1 1 n/a 0 0 00:03:59.290 tests 1 1 1 0 0 00:03:59.290 asserts 15 15 15 0 n/a 00:03:59.290 00:03:59.290 Elapsed time = 0.070 seconds 00:03:59.290 ************************************ 00:03:59.290 END TEST env_mem_callbacks 00:03:59.290 ************************************ 00:03:59.290 00:03:59.290 real 0m0.285s 00:03:59.290 user 0m0.102s 00:03:59.290 sys 0m0.077s 00:03:59.290 01:59:07 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.290 01:59:07 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:59.290 01:59:07 env -- common/autotest_common.sh@1142 -- # return 0 00:03:59.290 ************************************ 00:03:59.290 END TEST env 00:03:59.290 ************************************ 00:03:59.290 00:03:59.290 real 0m8.438s 00:03:59.290 user 0m6.407s 00:03:59.290 sys 0m1.617s 00:03:59.290 01:59:07 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:59.290 01:59:07 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.290 01:59:07 -- common/autotest_common.sh@1142 -- # return 0 
00:03:59.290 01:59:07 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:59.290 01:59:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:59.290 01:59:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:59.290 01:59:07 -- common/autotest_common.sh@10 -- # set +x 00:03:59.290 ************************************ 00:03:59.290 START TEST rpc 00:03:59.290 ************************************ 00:03:59.290 01:59:07 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:59.290 * Looking for test storage... 00:03:59.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:59.290 01:59:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58769 00:03:59.290 01:59:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:59.290 01:59:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.290 01:59:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58769 00:03:59.290 01:59:08 rpc -- common/autotest_common.sh@829 -- # '[' -z 58769 ']' 00:03:59.290 01:59:08 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.290 01:59:08 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:59.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.290 01:59:08 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.290 01:59:08 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:59.290 01:59:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.548 [2024-07-23 01:59:08.259421] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:03:59.548 [2024-07-23 01:59:08.259868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58769 ] 00:03:59.807 [2024-07-23 01:59:08.441076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.065 [2024-07-23 01:59:08.701350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:00.065 [2024-07-23 01:59:08.701424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58769' to capture a snapshot of events at runtime. 00:04:00.065 [2024-07-23 01:59:08.701443] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:00.065 [2024-07-23 01:59:08.701455] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:00.065 [2024-07-23 01:59:08.701468] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58769 for offline analysis/debug. 
00:04:00.065 [2024-07-23 01:59:08.701535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.001 01:59:09 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:01.001 01:59:09 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:01.001 01:59:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.001 01:59:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:01.001 01:59:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:01.001 01:59:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:01.001 01:59:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.001 01:59:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.001 01:59:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 ************************************ 00:04:01.001 START TEST rpc_integrity 00:04:01.001 ************************************ 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.001 01:59:09 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.001 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.001 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.001 { 00:04:01.001 "name": "Malloc0", 00:04:01.001 "aliases": [ 00:04:01.001 "68272ab7-6c59-4648-b5df-4eebbfba2cc5" 00:04:01.001 ], 00:04:01.001 "product_name": "Malloc disk", 00:04:01.001 "block_size": 512, 00:04:01.001 "num_blocks": 16384, 00:04:01.001 "uuid": "68272ab7-6c59-4648-b5df-4eebbfba2cc5", 00:04:01.001 "assigned_rate_limits": { 00:04:01.001 "rw_ios_per_sec": 0, 00:04:01.001 "rw_mbytes_per_sec": 0, 00:04:01.001 "r_mbytes_per_sec": 0, 00:04:01.001 "w_mbytes_per_sec": 0 00:04:01.001 }, 00:04:01.001 "claimed": false, 00:04:01.001 "zoned": false, 00:04:01.001 "supported_io_types": { 00:04:01.001 "read": true, 00:04:01.001 "write": true, 00:04:01.001 "unmap": true, 00:04:01.001 "flush": true, 00:04:01.001 "reset": true, 00:04:01.001 "nvme_admin": false, 00:04:01.001 "nvme_io": false, 00:04:01.001 "nvme_io_md": false, 00:04:01.001 "write_zeroes": true, 00:04:01.001 "zcopy": true, 00:04:01.001 "get_zone_info": false, 00:04:01.001 "zone_management": false, 00:04:01.001 "zone_append": false, 00:04:01.001 "compare": false, 00:04:01.001 "compare_and_write": false, 00:04:01.001 "abort": true, 00:04:01.001 "seek_hole": false, 
00:04:01.001 "seek_data": false, 00:04:01.001 "copy": true, 00:04:01.001 "nvme_iov_md": false 00:04:01.001 }, 00:04:01.001 "memory_domains": [ 00:04:01.001 { 00:04:01.001 "dma_device_id": "system", 00:04:01.001 "dma_device_type": 1 00:04:01.001 }, 00:04:01.001 { 00:04:01.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.002 "dma_device_type": 2 00:04:01.002 } 00:04:01.002 ], 00:04:01.002 "driver_specific": {} 00:04:01.002 } 00:04:01.002 ]' 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.002 [2024-07-23 01:59:09.611907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:01.002 [2024-07-23 01:59:09.611971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:01.002 [2024-07-23 01:59:09.612004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:04:01.002 [2024-07-23 01:59:09.612018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:01.002 [2024-07-23 01:59:09.614648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:01.002 [2024-07-23 01:59:09.614688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:01.002 Passthru0 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.002 { 00:04:01.002 "name": "Malloc0", 00:04:01.002 "aliases": [ 00:04:01.002 "68272ab7-6c59-4648-b5df-4eebbfba2cc5" 00:04:01.002 ], 00:04:01.002 "product_name": "Malloc disk", 00:04:01.002 "block_size": 512, 00:04:01.002 "num_blocks": 16384, 00:04:01.002 "uuid": "68272ab7-6c59-4648-b5df-4eebbfba2cc5", 00:04:01.002 "assigned_rate_limits": { 00:04:01.002 "rw_ios_per_sec": 0, 00:04:01.002 "rw_mbytes_per_sec": 0, 00:04:01.002 "r_mbytes_per_sec": 0, 00:04:01.002 "w_mbytes_per_sec": 0 00:04:01.002 }, 00:04:01.002 "claimed": true, 00:04:01.002 "claim_type": "exclusive_write", 00:04:01.002 "zoned": false, 00:04:01.002 "supported_io_types": { 00:04:01.002 "read": true, 00:04:01.002 "write": true, 00:04:01.002 "unmap": true, 00:04:01.002 "flush": true, 00:04:01.002 "reset": true, 00:04:01.002 "nvme_admin": false, 00:04:01.002 "nvme_io": false, 00:04:01.002 "nvme_io_md": false, 00:04:01.002 "write_zeroes": true, 00:04:01.002 "zcopy": true, 00:04:01.002 "get_zone_info": false, 00:04:01.002 "zone_management": false, 00:04:01.002 "zone_append": false, 00:04:01.002 "compare": false, 00:04:01.002 "compare_and_write": false, 00:04:01.002 "abort": true, 00:04:01.002 "seek_hole": false, 00:04:01.002 "seek_data": false, 00:04:01.002 "copy": true, 00:04:01.002 "nvme_iov_md": false 00:04:01.002 }, 00:04:01.002 "memory_domains": [ 00:04:01.002 { 00:04:01.002 "dma_device_id": "system", 00:04:01.002 "dma_device_type": 1 00:04:01.002 }, 00:04:01.002 { 00:04:01.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.002 "dma_device_type": 2 00:04:01.002 } 00:04:01.002 ], 00:04:01.002 "driver_specific": {} 00:04:01.002 }, 00:04:01.002 { 00:04:01.002 "name": "Passthru0", 00:04:01.002 "aliases": [ 00:04:01.002 "541c4927-e97a-52e3-8d0f-2942e0f2d0fa" 00:04:01.002 ], 00:04:01.002 "product_name": "passthru", 00:04:01.002 
"block_size": 512, 00:04:01.002 "num_blocks": 16384, 00:04:01.002 "uuid": "541c4927-e97a-52e3-8d0f-2942e0f2d0fa", 00:04:01.002 "assigned_rate_limits": { 00:04:01.002 "rw_ios_per_sec": 0, 00:04:01.002 "rw_mbytes_per_sec": 0, 00:04:01.002 "r_mbytes_per_sec": 0, 00:04:01.002 "w_mbytes_per_sec": 0 00:04:01.002 }, 00:04:01.002 "claimed": false, 00:04:01.002 "zoned": false, 00:04:01.002 "supported_io_types": { 00:04:01.002 "read": true, 00:04:01.002 "write": true, 00:04:01.002 "unmap": true, 00:04:01.002 "flush": true, 00:04:01.002 "reset": true, 00:04:01.002 "nvme_admin": false, 00:04:01.002 "nvme_io": false, 00:04:01.002 "nvme_io_md": false, 00:04:01.002 "write_zeroes": true, 00:04:01.002 "zcopy": true, 00:04:01.002 "get_zone_info": false, 00:04:01.002 "zone_management": false, 00:04:01.002 "zone_append": false, 00:04:01.002 "compare": false, 00:04:01.002 "compare_and_write": false, 00:04:01.002 "abort": true, 00:04:01.002 "seek_hole": false, 00:04:01.002 "seek_data": false, 00:04:01.002 "copy": true, 00:04:01.002 "nvme_iov_md": false 00:04:01.002 }, 00:04:01.002 "memory_domains": [ 00:04:01.002 { 00:04:01.002 "dma_device_id": "system", 00:04:01.002 "dma_device_type": 1 00:04:01.002 }, 00:04:01.002 { 00:04:01.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.002 "dma_device_type": 2 00:04:01.002 } 00:04:01.002 ], 00:04:01.002 "driver_specific": { 00:04:01.002 "passthru": { 00:04:01.002 "name": "Passthru0", 00:04:01.002 "base_bdev_name": "Malloc0" 00:04:01.002 } 00:04:01.002 } 00:04:01.002 } 00:04:01.002 ]' 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.002 01:59:09 rpc.rpc_integrity 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.002 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.002 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.261 ************************************ 00:04:01.261 END TEST rpc_integrity 00:04:01.261 ************************************ 00:04:01.261 01:59:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.261 00:04:01.261 real 0m0.354s 00:04:01.261 user 0m0.225s 00:04:01.261 sys 0m0.042s 00:04:01.261 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.261 01:59:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.261 01:59:09 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.261 01:59:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:01.261 01:59:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.261 01:59:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.261 01:59:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.261 ************************************ 00:04:01.261 START TEST rpc_plugins 00:04:01.261 ************************************ 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # 
rpc_plugins 00:04:01.261 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.261 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:01.261 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.261 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.261 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:01.261 { 00:04:01.261 "name": "Malloc1", 00:04:01.261 "aliases": [ 00:04:01.261 "60a81faf-0ae7-4ead-b463-7169ffe8c835" 00:04:01.261 ], 00:04:01.261 "product_name": "Malloc disk", 00:04:01.261 "block_size": 4096, 00:04:01.261 "num_blocks": 256, 00:04:01.261 "uuid": "60a81faf-0ae7-4ead-b463-7169ffe8c835", 00:04:01.261 "assigned_rate_limits": { 00:04:01.261 "rw_ios_per_sec": 0, 00:04:01.261 "rw_mbytes_per_sec": 0, 00:04:01.261 "r_mbytes_per_sec": 0, 00:04:01.261 "w_mbytes_per_sec": 0 00:04:01.261 }, 00:04:01.261 "claimed": false, 00:04:01.261 "zoned": false, 00:04:01.261 "supported_io_types": { 00:04:01.261 "read": true, 00:04:01.261 "write": true, 00:04:01.261 "unmap": true, 00:04:01.261 "flush": true, 00:04:01.261 "reset": true, 00:04:01.261 "nvme_admin": false, 00:04:01.261 "nvme_io": false, 00:04:01.261 "nvme_io_md": false, 00:04:01.261 "write_zeroes": true, 00:04:01.261 "zcopy": true, 00:04:01.261 "get_zone_info": false, 00:04:01.261 "zone_management": false, 00:04:01.262 "zone_append": false, 00:04:01.262 "compare": false, 00:04:01.262 "compare_and_write": false, 00:04:01.262 "abort": true, 00:04:01.262 
"seek_hole": false, 00:04:01.262 "seek_data": false, 00:04:01.262 "copy": true, 00:04:01.262 "nvme_iov_md": false 00:04:01.262 }, 00:04:01.262 "memory_domains": [ 00:04:01.262 { 00:04:01.262 "dma_device_id": "system", 00:04:01.262 "dma_device_type": 1 00:04:01.262 }, 00:04:01.262 { 00:04:01.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.262 "dma_device_type": 2 00:04:01.262 } 00:04:01.262 ], 00:04:01.262 "driver_specific": {} 00:04:01.262 } 00:04:01.262 ]' 00:04:01.262 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:01.262 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:01.262 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:01.262 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.262 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.262 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.262 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:01.262 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.262 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.262 01:59:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.262 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:01.262 01:59:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:01.262 ************************************ 00:04:01.262 END TEST rpc_plugins 00:04:01.262 ************************************ 00:04:01.262 01:59:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:01.262 00:04:01.262 real 0m0.161s 00:04:01.262 user 0m0.105s 00:04:01.262 sys 0m0.020s 00:04:01.262 01:59:10 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.262 01:59:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:01.521 01:59:10 rpc -- 
common/autotest_common.sh@1142 -- # return 0 00:04:01.521 01:59:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:01.521 01:59:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.521 01:59:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.521 01:59:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.521 ************************************ 00:04:01.521 START TEST rpc_trace_cmd_test 00:04:01.521 ************************************ 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:01.521 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58769", 00:04:01.521 "tpoint_group_mask": "0x8", 00:04:01.521 "iscsi_conn": { 00:04:01.521 "mask": "0x2", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "scsi": { 00:04:01.521 "mask": "0x4", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "bdev": { 00:04:01.521 "mask": "0x8", 00:04:01.521 "tpoint_mask": "0xffffffffffffffff" 00:04:01.521 }, 00:04:01.521 "nvmf_rdma": { 00:04:01.521 "mask": "0x10", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "nvmf_tcp": { 00:04:01.521 "mask": "0x20", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "ftl": { 00:04:01.521 "mask": "0x40", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "blobfs": { 00:04:01.521 "mask": "0x80", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 
00:04:01.521 "dsa": { 00:04:01.521 "mask": "0x200", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "thread": { 00:04:01.521 "mask": "0x400", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "nvme_pcie": { 00:04:01.521 "mask": "0x800", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "iaa": { 00:04:01.521 "mask": "0x1000", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "nvme_tcp": { 00:04:01.521 "mask": "0x2000", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "bdev_nvme": { 00:04:01.521 "mask": "0x4000", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 }, 00:04:01.521 "sock": { 00:04:01.521 "mask": "0x8000", 00:04:01.521 "tpoint_mask": "0x0" 00:04:01.521 } 00:04:01.521 }' 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:01.521 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:01.780 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:01.780 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:01.780 ************************************ 00:04:01.780 END TEST rpc_trace_cmd_test 00:04:01.780 ************************************ 00:04:01.780 01:59:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:01.780 00:04:01.780 real 0m0.286s 00:04:01.780 user 0m0.243s 00:04:01.780 sys 0m0.031s 00:04:01.780 01:59:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.780 
01:59:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:01.780 01:59:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.780 01:59:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:01.780 01:59:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:01.780 01:59:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:01.780 01:59:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.780 01:59:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.780 01:59:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.780 ************************************ 00:04:01.780 START TEST rpc_daemon_integrity 00:04:01.780 ************************************ 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.780 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:01.780 { 00:04:01.780 "name": "Malloc2", 00:04:01.780 "aliases": [ 00:04:01.780 "df769d28-15f0-42de-81f5-53e408f960f4" 00:04:01.780 ], 00:04:01.780 "product_name": "Malloc disk", 00:04:01.780 "block_size": 512, 00:04:01.780 "num_blocks": 16384, 00:04:01.780 "uuid": "df769d28-15f0-42de-81f5-53e408f960f4", 00:04:01.780 "assigned_rate_limits": { 00:04:01.780 "rw_ios_per_sec": 0, 00:04:01.780 "rw_mbytes_per_sec": 0, 00:04:01.780 "r_mbytes_per_sec": 0, 00:04:01.780 "w_mbytes_per_sec": 0 00:04:01.780 }, 00:04:01.780 "claimed": false, 00:04:01.780 "zoned": false, 00:04:01.780 "supported_io_types": { 00:04:01.780 "read": true, 00:04:01.780 "write": true, 00:04:01.780 "unmap": true, 00:04:01.780 "flush": true, 00:04:01.780 "reset": true, 00:04:01.780 "nvme_admin": false, 00:04:01.780 "nvme_io": false, 00:04:01.780 "nvme_io_md": false, 00:04:01.780 "write_zeroes": true, 00:04:01.780 "zcopy": true, 00:04:01.780 "get_zone_info": false, 00:04:01.780 "zone_management": false, 00:04:01.780 "zone_append": false, 00:04:01.780 "compare": false, 00:04:01.780 "compare_and_write": false, 00:04:01.780 "abort": true, 00:04:01.780 "seek_hole": false, 00:04:01.780 "seek_data": false, 00:04:01.781 "copy": true, 00:04:01.781 "nvme_iov_md": false 00:04:01.781 }, 00:04:01.781 "memory_domains": [ 00:04:01.781 { 00:04:01.781 "dma_device_id": "system", 00:04:01.781 "dma_device_type": 1 00:04:01.781 }, 00:04:01.781 { 00:04:01.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.781 "dma_device_type": 2 00:04:01.781 } 00:04:01.781 ], 00:04:01.781 "driver_specific": {} 00:04:01.781 } 
00:04:01.781 ]' 00:04:01.781 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.040 [2024-07-23 01:59:10.563691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:02.040 [2024-07-23 01:59:10.563745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:02.040 [2024-07-23 01:59:10.563775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:04:02.040 [2024-07-23 01:59:10.563790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:02.040 [2024-07-23 01:59:10.566505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:02.040 [2024-07-23 01:59:10.566622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:02.040 Passthru0 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.040 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:02.040 { 00:04:02.040 "name": "Malloc2", 00:04:02.040 "aliases": [ 00:04:02.040 "df769d28-15f0-42de-81f5-53e408f960f4" 00:04:02.040 ], 00:04:02.040 "product_name": "Malloc disk", 00:04:02.040 
"block_size": 512, 00:04:02.040 "num_blocks": 16384, 00:04:02.040 "uuid": "df769d28-15f0-42de-81f5-53e408f960f4", 00:04:02.040 "assigned_rate_limits": { 00:04:02.040 "rw_ios_per_sec": 0, 00:04:02.040 "rw_mbytes_per_sec": 0, 00:04:02.040 "r_mbytes_per_sec": 0, 00:04:02.040 "w_mbytes_per_sec": 0 00:04:02.040 }, 00:04:02.040 "claimed": true, 00:04:02.040 "claim_type": "exclusive_write", 00:04:02.040 "zoned": false, 00:04:02.040 "supported_io_types": { 00:04:02.040 "read": true, 00:04:02.040 "write": true, 00:04:02.040 "unmap": true, 00:04:02.040 "flush": true, 00:04:02.040 "reset": true, 00:04:02.040 "nvme_admin": false, 00:04:02.040 "nvme_io": false, 00:04:02.040 "nvme_io_md": false, 00:04:02.040 "write_zeroes": true, 00:04:02.040 "zcopy": true, 00:04:02.040 "get_zone_info": false, 00:04:02.040 "zone_management": false, 00:04:02.040 "zone_append": false, 00:04:02.040 "compare": false, 00:04:02.040 "compare_and_write": false, 00:04:02.040 "abort": true, 00:04:02.040 "seek_hole": false, 00:04:02.040 "seek_data": false, 00:04:02.040 "copy": true, 00:04:02.040 "nvme_iov_md": false 00:04:02.040 }, 00:04:02.040 "memory_domains": [ 00:04:02.040 { 00:04:02.040 "dma_device_id": "system", 00:04:02.040 "dma_device_type": 1 00:04:02.040 }, 00:04:02.040 { 00:04:02.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.040 "dma_device_type": 2 00:04:02.040 } 00:04:02.040 ], 00:04:02.040 "driver_specific": {} 00:04:02.040 }, 00:04:02.040 { 00:04:02.040 "name": "Passthru0", 00:04:02.040 "aliases": [ 00:04:02.040 "7dc1a385-d06c-5061-aef7-f65d14b8667a" 00:04:02.040 ], 00:04:02.040 "product_name": "passthru", 00:04:02.040 "block_size": 512, 00:04:02.040 "num_blocks": 16384, 00:04:02.040 "uuid": "7dc1a385-d06c-5061-aef7-f65d14b8667a", 00:04:02.040 "assigned_rate_limits": { 00:04:02.040 "rw_ios_per_sec": 0, 00:04:02.040 "rw_mbytes_per_sec": 0, 00:04:02.040 "r_mbytes_per_sec": 0, 00:04:02.040 "w_mbytes_per_sec": 0 00:04:02.040 }, 00:04:02.040 "claimed": false, 00:04:02.040 "zoned": false, 
00:04:02.040 "supported_io_types": { 00:04:02.040 "read": true, 00:04:02.040 "write": true, 00:04:02.040 "unmap": true, 00:04:02.040 "flush": true, 00:04:02.040 "reset": true, 00:04:02.040 "nvme_admin": false, 00:04:02.040 "nvme_io": false, 00:04:02.040 "nvme_io_md": false, 00:04:02.040 "write_zeroes": true, 00:04:02.040 "zcopy": true, 00:04:02.040 "get_zone_info": false, 00:04:02.040 "zone_management": false, 00:04:02.040 "zone_append": false, 00:04:02.040 "compare": false, 00:04:02.040 "compare_and_write": false, 00:04:02.040 "abort": true, 00:04:02.040 "seek_hole": false, 00:04:02.040 "seek_data": false, 00:04:02.040 "copy": true, 00:04:02.040 "nvme_iov_md": false 00:04:02.040 }, 00:04:02.040 "memory_domains": [ 00:04:02.040 { 00:04:02.040 "dma_device_id": "system", 00:04:02.040 "dma_device_type": 1 00:04:02.040 }, 00:04:02.040 { 00:04:02.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:02.040 "dma_device_type": 2 00:04:02.040 } 00:04:02.040 ], 00:04:02.040 "driver_specific": { 00:04:02.040 "passthru": { 00:04:02.040 "name": "Passthru0", 00:04:02.040 "base_bdev_name": "Malloc2" 00:04:02.040 } 00:04:02.040 } 00:04:02.040 } 00:04:02.040 ]' 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:02.041 ************************************ 00:04:02.041 END TEST rpc_daemon_integrity 00:04:02.041 ************************************ 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:02.041 00:04:02.041 real 0m0.344s 00:04:02.041 user 0m0.216s 00:04:02.041 sys 0m0.044s 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.041 01:59:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:02.041 01:59:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:02.041 01:59:10 rpc -- rpc/rpc.sh@84 -- # killprocess 58769 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@948 -- # '[' -z 58769 ']' 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@952 -- # kill -0 58769 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@953 -- # uname 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58769 00:04:02.041 killing process with pid 58769 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:02.041 01:59:10 rpc -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58769' 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@967 -- # kill 58769 00:04:02.041 01:59:10 rpc -- common/autotest_common.sh@972 -- # wait 58769 00:04:04.574 ************************************ 00:04:04.574 END TEST rpc 00:04:04.574 ************************************ 00:04:04.574 00:04:04.574 real 0m4.824s 00:04:04.574 user 0m5.390s 00:04:04.574 sys 0m0.948s 00:04:04.574 01:59:12 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.574 01:59:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 01:59:12 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.574 01:59:12 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.574 01:59:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.574 01:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.574 01:59:12 -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 ************************************ 00:04:04.574 START TEST skip_rpc 00:04:04.574 ************************************ 00:04:04.574 01:59:12 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:04.574 * Looking for test storage... 
00:04:04.574 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:04.574 01:59:12 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:04.574 01:59:12 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:04.574 01:59:12 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.574 01:59:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.574 01:59:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.574 01:59:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.574 ************************************ 00:04:04.574 START TEST skip_rpc 00:04:04.574 ************************************ 00:04:04.574 01:59:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:04.574 01:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58992 00:04:04.574 01:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.574 01:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.574 01:59:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.574 [2024-07-23 01:59:13.112324] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:04.574 [2024-07-23 01:59:13.112559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58992 ] 00:04:04.574 [2024-07-23 01:59:13.286453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.832 [2024-07-23 01:59:13.527052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58992 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 58992 ']' 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 58992 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 58992 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58992' 00:04:10.126 killing process with pid 58992 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 58992 00:04:10.126 01:59:17 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 58992 00:04:11.503 00:04:11.503 real 0m7.020s 00:04:11.503 user 0m6.395s 00:04:11.503 sys 0m0.522s 00:04:11.503 01:59:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.503 ************************************ 00:04:11.503 END TEST skip_rpc 00:04:11.503 01:59:19 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.503 ************************************ 00:04:11.503 01:59:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.503 01:59:20 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:11.503 01:59:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.503 01:59:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.503 01:59:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 
00:04:11.503 ************************************ 00:04:11.503 START TEST skip_rpc_with_json 00:04:11.503 ************************************ 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:11.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59096 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59096 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 59096 ']' 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.503 01:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.503 [2024-07-23 01:59:20.144993] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:11.503 [2024-07-23 01:59:20.145186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59096 ] 00:04:11.762 [2024-07-23 01:59:20.296127] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.762 [2024-07-23 01:59:20.507727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.699 [2024-07-23 01:59:21.247636] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:12.699 request: 00:04:12.699 { 00:04:12.699 "trtype": "tcp", 00:04:12.699 "method": "nvmf_get_transports", 00:04:12.699 "req_id": 1 00:04:12.699 } 00:04:12.699 Got JSON-RPC error response 00:04:12.699 response: 00:04:12.699 { 00:04:12.699 "code": -19, 00:04:12.699 "message": "No such device" 00:04:12.699 } 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.699 [2024-07-23 01:59:21.259741] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.699 01:59:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.699 { 00:04:12.699 "subsystems": [ 00:04:12.699 { 00:04:12.699 "subsystem": "keyring", 00:04:12.699 "config": [] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "iobuf", 00:04:12.699 "config": [ 00:04:12.699 { 00:04:12.699 "method": "iobuf_set_options", 00:04:12.699 "params": { 00:04:12.699 "small_pool_count": 8192, 00:04:12.699 "large_pool_count": 1024, 00:04:12.699 "small_bufsize": 8192, 00:04:12.699 "large_bufsize": 135168 00:04:12.699 } 00:04:12.699 } 00:04:12.699 ] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "sock", 00:04:12.699 "config": [ 00:04:12.699 { 00:04:12.699 "method": "sock_set_default_impl", 00:04:12.699 "params": { 00:04:12.699 "impl_name": "posix" 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "sock_impl_set_options", 00:04:12.699 "params": { 00:04:12.699 "impl_name": "ssl", 00:04:12.699 "recv_buf_size": 4096, 00:04:12.699 "send_buf_size": 4096, 00:04:12.699 "enable_recv_pipe": true, 00:04:12.699 "enable_quickack": false, 00:04:12.699 "enable_placement_id": 0, 00:04:12.699 "enable_zerocopy_send_server": true, 00:04:12.699 "enable_zerocopy_send_client": false, 00:04:12.699 "zerocopy_threshold": 0, 00:04:12.699 "tls_version": 0, 00:04:12.699 "enable_ktls": false 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "sock_impl_set_options", 00:04:12.699 "params": { 
00:04:12.699 "impl_name": "posix", 00:04:12.699 "recv_buf_size": 2097152, 00:04:12.699 "send_buf_size": 2097152, 00:04:12.699 "enable_recv_pipe": true, 00:04:12.699 "enable_quickack": false, 00:04:12.699 "enable_placement_id": 0, 00:04:12.699 "enable_zerocopy_send_server": true, 00:04:12.699 "enable_zerocopy_send_client": false, 00:04:12.699 "zerocopy_threshold": 0, 00:04:12.699 "tls_version": 0, 00:04:12.699 "enable_ktls": false 00:04:12.699 } 00:04:12.699 } 00:04:12.699 ] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "vmd", 00:04:12.699 "config": [] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "accel", 00:04:12.699 "config": [ 00:04:12.699 { 00:04:12.699 "method": "accel_set_options", 00:04:12.699 "params": { 00:04:12.699 "small_cache_size": 128, 00:04:12.699 "large_cache_size": 16, 00:04:12.699 "task_count": 2048, 00:04:12.699 "sequence_count": 2048, 00:04:12.699 "buf_count": 2048 00:04:12.699 } 00:04:12.699 } 00:04:12.699 ] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "bdev", 00:04:12.699 "config": [ 00:04:12.699 { 00:04:12.699 "method": "bdev_set_options", 00:04:12.699 "params": { 00:04:12.699 "bdev_io_pool_size": 65535, 00:04:12.699 "bdev_io_cache_size": 256, 00:04:12.699 "bdev_auto_examine": true, 00:04:12.699 "iobuf_small_cache_size": 128, 00:04:12.699 "iobuf_large_cache_size": 16 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "bdev_raid_set_options", 00:04:12.699 "params": { 00:04:12.699 "process_window_size_kb": 1024, 00:04:12.699 "process_max_bandwidth_mb_sec": 0 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "bdev_iscsi_set_options", 00:04:12.699 "params": { 00:04:12.699 "timeout_sec": 30 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "bdev_nvme_set_options", 00:04:12.699 "params": { 00:04:12.699 "action_on_timeout": "none", 00:04:12.699 "timeout_us": 0, 00:04:12.699 "timeout_admin_us": 0, 00:04:12.699 "keep_alive_timeout_ms": 10000, 00:04:12.699 
"arbitration_burst": 0, 00:04:12.699 "low_priority_weight": 0, 00:04:12.699 "medium_priority_weight": 0, 00:04:12.699 "high_priority_weight": 0, 00:04:12.699 "nvme_adminq_poll_period_us": 10000, 00:04:12.699 "nvme_ioq_poll_period_us": 0, 00:04:12.699 "io_queue_requests": 0, 00:04:12.699 "delay_cmd_submit": true, 00:04:12.699 "transport_retry_count": 4, 00:04:12.699 "bdev_retry_count": 3, 00:04:12.699 "transport_ack_timeout": 0, 00:04:12.699 "ctrlr_loss_timeout_sec": 0, 00:04:12.699 "reconnect_delay_sec": 0, 00:04:12.699 "fast_io_fail_timeout_sec": 0, 00:04:12.699 "disable_auto_failback": false, 00:04:12.699 "generate_uuids": false, 00:04:12.699 "transport_tos": 0, 00:04:12.699 "nvme_error_stat": false, 00:04:12.699 "rdma_srq_size": 0, 00:04:12.699 "io_path_stat": false, 00:04:12.699 "allow_accel_sequence": false, 00:04:12.699 "rdma_max_cq_size": 0, 00:04:12.699 "rdma_cm_event_timeout_ms": 0, 00:04:12.699 "dhchap_digests": [ 00:04:12.699 "sha256", 00:04:12.699 "sha384", 00:04:12.699 "sha512" 00:04:12.699 ], 00:04:12.699 "dhchap_dhgroups": [ 00:04:12.699 "null", 00:04:12.699 "ffdhe2048", 00:04:12.699 "ffdhe3072", 00:04:12.699 "ffdhe4096", 00:04:12.699 "ffdhe6144", 00:04:12.699 "ffdhe8192" 00:04:12.699 ] 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "bdev_nvme_set_hotplug", 00:04:12.699 "params": { 00:04:12.699 "period_us": 100000, 00:04:12.699 "enable": false 00:04:12.699 } 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "method": "bdev_wait_for_examine" 00:04:12.699 } 00:04:12.699 ] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "scsi", 00:04:12.699 "config": null 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "scheduler", 00:04:12.699 "config": [ 00:04:12.699 { 00:04:12.699 "method": "framework_set_scheduler", 00:04:12.699 "params": { 00:04:12.699 "name": "static" 00:04:12.699 } 00:04:12.699 } 00:04:12.699 ] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "vhost_scsi", 00:04:12.699 "config": [] 00:04:12.699 }, 
00:04:12.699 { 00:04:12.699 "subsystem": "vhost_blk", 00:04:12.699 "config": [] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "ublk", 00:04:12.699 "config": [] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "nbd", 00:04:12.699 "config": [] 00:04:12.699 }, 00:04:12.699 { 00:04:12.699 "subsystem": "nvmf", 00:04:12.699 "config": [ 00:04:12.699 { 00:04:12.699 "method": "nvmf_set_config", 00:04:12.699 "params": { 00:04:12.699 "discovery_filter": "match_any", 00:04:12.699 "admin_cmd_passthru": { 00:04:12.699 "identify_ctrlr": false 00:04:12.699 } 00:04:12.699 } 00:04:12.699 }, 00:04:12.700 { 00:04:12.700 "method": "nvmf_set_max_subsystems", 00:04:12.700 "params": { 00:04:12.700 "max_subsystems": 1024 00:04:12.700 } 00:04:12.700 }, 00:04:12.700 { 00:04:12.700 "method": "nvmf_set_crdt", 00:04:12.700 "params": { 00:04:12.700 "crdt1": 0, 00:04:12.700 "crdt2": 0, 00:04:12.700 "crdt3": 0 00:04:12.700 } 00:04:12.700 }, 00:04:12.700 { 00:04:12.700 "method": "nvmf_create_transport", 00:04:12.700 "params": { 00:04:12.700 "trtype": "TCP", 00:04:12.700 "max_queue_depth": 128, 00:04:12.700 "max_io_qpairs_per_ctrlr": 127, 00:04:12.700 "in_capsule_data_size": 4096, 00:04:12.700 "max_io_size": 131072, 00:04:12.700 "io_unit_size": 131072, 00:04:12.700 "max_aq_depth": 128, 00:04:12.700 "num_shared_buffers": 511, 00:04:12.700 "buf_cache_size": 4294967295, 00:04:12.700 "dif_insert_or_strip": false, 00:04:12.700 "zcopy": false, 00:04:12.700 "c2h_success": true, 00:04:12.700 "sock_priority": 0, 00:04:12.700 "abort_timeout_sec": 1, 00:04:12.700 "ack_timeout": 0, 00:04:12.700 "data_wr_pool_size": 0 00:04:12.700 } 00:04:12.700 } 00:04:12.700 ] 00:04:12.700 }, 00:04:12.700 { 00:04:12.700 "subsystem": "iscsi", 00:04:12.700 "config": [ 00:04:12.700 { 00:04:12.700 "method": "iscsi_set_options", 00:04:12.700 "params": { 00:04:12.700 "node_base": "iqn.2016-06.io.spdk", 00:04:12.700 "max_sessions": 128, 00:04:12.700 "max_connections_per_session": 2, 00:04:12.700 "max_queue_depth": 
64, 00:04:12.700 "default_time2wait": 2, 00:04:12.700 "default_time2retain": 20, 00:04:12.700 "first_burst_length": 8192, 00:04:12.700 "immediate_data": true, 00:04:12.700 "allow_duplicated_isid": false, 00:04:12.700 "error_recovery_level": 0, 00:04:12.700 "nop_timeout": 60, 00:04:12.700 "nop_in_interval": 30, 00:04:12.700 "disable_chap": false, 00:04:12.700 "require_chap": false, 00:04:12.700 "mutual_chap": false, 00:04:12.700 "chap_group": 0, 00:04:12.700 "max_large_datain_per_connection": 64, 00:04:12.700 "max_r2t_per_connection": 4, 00:04:12.700 "pdu_pool_size": 36864, 00:04:12.700 "immediate_data_pool_size": 16384, 00:04:12.700 "data_out_pool_size": 2048 00:04:12.700 } 00:04:12.700 } 00:04:12.700 ] 00:04:12.700 } 00:04:12.700 ] 00:04:12.700 } 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59096 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59096 ']' 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59096 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59096 00:04:12.700 killing process with pid 59096 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59096' 00:04:12.700 01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59096 00:04:12.700 
01:59:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59096 00:04:15.236 01:59:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59141 00:04:15.236 01:59:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:15.236 01:59:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59141 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 59141 ']' 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 59141 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59141 00:04:20.501 killing process with pid 59141 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59141' 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 59141 00:04:20.501 01:59:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 59141 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:21.877 00:04:21.877 real 
0m10.424s 00:04:21.877 user 0m9.619s 00:04:21.877 sys 0m1.108s 00:04:21.877 ************************************ 00:04:21.877 END TEST skip_rpc_with_json 00:04:21.877 ************************************ 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.877 01:59:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:21.877 01:59:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:21.877 01:59:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.877 01:59:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.877 01:59:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:21.877 ************************************ 00:04:21.877 START TEST skip_rpc_with_delay 00:04:21.877 ************************************ 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.877 01:59:30 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:21.877 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.136 [2024-07-23 01:59:30.679257] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:22.136 [2024-07-23 01:59:30.679444] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:22.136 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:22.136 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:22.136 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:22.136 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:22.136 00:04:22.136 real 0m0.248s 00:04:22.136 user 0m0.125s 00:04:22.136 sys 0m0.119s 00:04:22.136 ************************************ 00:04:22.136 END TEST skip_rpc_with_delay 00:04:22.136 ************************************ 00:04:22.136 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.136 01:59:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.136 01:59:30 skip_rpc -- common/autotest_common.sh@1142 -- 
# return 0 00:04:22.136 01:59:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.136 01:59:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.136 01:59:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.136 01:59:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:22.136 01:59:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.136 01:59:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.136 ************************************ 00:04:22.136 START TEST exit_on_failed_rpc_init 00:04:22.136 ************************************ 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:22.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59269 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59269 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 59269 ']' 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.136 01:59:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.394 [2024-07-23 01:59:30.992636] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:04:22.395 [2024-07-23 01:59:30.992854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59269 ] 00:04:22.395 [2024-07-23 01:59:31.168328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.653 [2024-07-23 01:59:31.381751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:23.589 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:23.590 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:23.590 01:59:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.590 [2024-07-23 01:59:32.290131] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:04:23.590 [2024-07-23 01:59:32.290338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:04:23.848 [2024-07-23 01:59:32.458572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.107 [2024-07-23 01:59:32.721896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.107 [2024-07-23 01:59:32.722361] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:24.107 [2024-07-23 01:59:32.722466] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.107 [2024-07-23 01:59:32.722581] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59269 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 59269 ']' 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 59269 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59269 00:04:24.367 killing process with pid 59269 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # 
echo 'killing process with pid 59269' 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 59269 00:04:24.367 01:59:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 59269 00:04:26.901 ************************************ 00:04:26.901 END TEST exit_on_failed_rpc_init 00:04:26.901 ************************************ 00:04:26.901 00:04:26.901 real 0m4.257s 00:04:26.901 user 0m4.729s 00:04:26.901 sys 0m0.742s 00:04:26.901 01:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.901 01:59:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:26.901 01:59:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:26.901 01:59:35 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:26.901 00:04:26.901 real 0m22.268s 00:04:26.901 user 0m20.965s 00:04:26.901 sys 0m2.698s 00:04:26.901 ************************************ 00:04:26.901 END TEST skip_rpc 00:04:26.901 ************************************ 00:04:26.901 01:59:35 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.901 01:59:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:26.901 01:59:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.901 01:59:35 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.901 01:59:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.901 01:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.901 01:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:26.901 ************************************ 00:04:26.901 START TEST rpc_client 00:04:26.901 ************************************ 00:04:26.901 01:59:35 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:26.901 * Looking for test storage... 
00:04:26.901 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:26.901 01:59:35 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:26.901 OK 00:04:26.901 01:59:35 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:26.901 00:04:26.901 real 0m0.149s 00:04:26.901 user 0m0.082s 00:04:26.901 sys 0m0.071s 00:04:26.901 01:59:35 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.901 01:59:35 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:26.901 ************************************ 00:04:26.901 END TEST rpc_client 00:04:26.901 ************************************ 00:04:26.901 01:59:35 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.901 01:59:35 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.901 01:59:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.901 01:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.901 01:59:35 -- common/autotest_common.sh@10 -- # set +x 00:04:26.901 ************************************ 00:04:26.901 START TEST json_config 00:04:26.901 ************************************ 00:04:26.901 01:59:35 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:26.901 01:59:35 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f54a7147-29b2-4915-ad44-5b62f2934558 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f54a7147-29b2-4915-ad44-5b62f2934558 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:26.901 01:59:35 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:26.901 01:59:35 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:26.901 01:59:35 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.901 01:59:35 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.901 01:59:35 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.902 01:59:35 json_config -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.902 01:59:35 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.902 01:59:35 json_config -- paths/export.sh@5 -- # export PATH 00:04:26.902 01:59:35 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@47 -- # : 0 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:04:26.902 01:59:35 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:26.902 01:59:35 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@11 -- # [[ 1 -eq 1 ]] 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:04:26.902 01:59:35 json_config -- 
iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:04:26.902 01:59:35 json_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:26.902 INFO: JSON configuration test init 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON 
configuration test init' 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.902 01:59:35 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:26.902 01:59:35 json_config -- json_config/common.sh@9 -- # local app=target 00:04:26.902 01:59:35 json_config -- json_config/common.sh@10 -- # shift 00:04:26.902 01:59:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:26.902 01:59:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:26.902 01:59:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:26.902 01:59:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.902 01:59:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:26.902 01:59:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59441 00:04:26.902 Waiting for target to run... 00:04:26.902 01:59:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:26.902 01:59:35 json_config -- json_config/common.sh@25 -- # waitforlisten 59441 /var/tmp/spdk_tgt.sock 00:04:26.902 01:59:35 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 59441 ']' 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:26.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:26.902 01:59:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.902 [2024-07-23 01:59:35.632212] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:26.902 [2024-07-23 01:59:35.632975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59441 ] 00:04:27.470 [2024-07-23 01:59:36.111698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.743 [2024-07-23 01:59:36.298992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.743 01:59:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:27.743 00:04:27.743 01:59:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:27.743 01:59:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:27.744 01:59:36 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:27.744 01:59:36 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:27.744 01:59:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.744 01:59:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.744 01:59:36 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:27.744 01:59:36 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:27.744 01:59:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.744 01:59:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.002 01:59:36 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:28.002 01:59:36 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:28.002 01:59:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:28.939 01:59:37 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 
00:04:28.939 01:59:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:28.939 01:59:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.939 01:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:28.940 01:59:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@51 -- # sort 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:28.940 01:59:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.940 01:59:37 json_config -- 
common/autotest_common.sh@10 -- # set +x 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@291 -- # create_iscsi_subsystem_config 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@225 -- # timing_enter create_iscsi_subsystem_config 00:04:28.940 01:59:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.940 01:59:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.940 01:59:37 json_config -- json_config/json_config.sh@226 -- # tgt_rpc bdev_malloc_create 64 1024 --name MallocForIscsi0 00:04:28.940 01:59:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 64 1024 --name MallocForIscsi0 00:04:29.508 MallocForIscsi0 00:04:29.508 01:59:38 json_config -- json_config/json_config.sh@227 -- # tgt_rpc iscsi_create_portal_group 1 127.0.0.1:3260 00:04:29.508 01:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_portal_group 1 127.0.0.1:3260 00:04:29.508 01:59:38 json_config -- json_config/json_config.sh@228 -- # tgt_rpc iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:04:29.508 01:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:04:29.767 01:59:38 json_config -- json_config/json_config.sh@229 -- # tgt_rpc iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:04:29.767 01:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock iscsi_create_target_node Target3 Target3_alias MallocForIscsi0:0 1:2 64 -d 00:04:30.026 01:59:38 json_config -- json_config/json_config.sh@230 -- # timing_exit create_iscsi_subsystem_config 00:04:30.026 01:59:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.026 01:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.026 01:59:38 json_config -- json_config/json_config.sh@294 -- # [[ 0 -eq 1 ]] 00:04:30.026 01:59:38 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:30.026 01:59:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.026 01:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.026 01:59:38 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:30.026 01:59:38 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.026 01:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:30.286 MallocBdevForConfigChangeCheck 00:04:30.286 01:59:38 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:30.286 01:59:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:30.286 01:59:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.286 01:59:39 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:30.286 01:59:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.854 INFO: shutting down applications... 00:04:30.854 01:59:39 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
00:04:30.854 01:59:39 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:30.854 01:59:39 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:30.854 01:59:39 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:30.854 01:59:39 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:31.119 Calling clear_iscsi_subsystem 00:04:31.119 Calling clear_nvmf_subsystem 00:04:31.119 Calling clear_nbd_subsystem 00:04:31.119 Calling clear_ublk_subsystem 00:04:31.119 Calling clear_vhost_blk_subsystem 00:04:31.119 Calling clear_vhost_scsi_subsystem 00:04:31.119 Calling clear_bdev_subsystem 00:04:31.119 01:59:39 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:31.119 01:59:39 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:31.119 01:59:39 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:31.119 01:59:39 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.119 01:59:39 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:31.119 01:59:39 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:31.413 01:59:40 json_config -- json_config/json_config.sh@349 -- # break 00:04:31.413 01:59:40 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:31.413 01:59:40 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:31.413 01:59:40 json_config -- json_config/common.sh@31 -- # local app=target 00:04:31.413 01:59:40 json_config -- json_config/common.sh@34 -- # [[ -n 22 
]] 00:04:31.413 01:59:40 json_config -- json_config/common.sh@35 -- # [[ -n 59441 ]] 00:04:31.413 01:59:40 json_config -- json_config/common.sh@38 -- # kill -SIGINT 59441 00:04:31.413 01:59:40 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.413 01:59:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.413 01:59:40 json_config -- json_config/common.sh@41 -- # kill -0 59441 00:04:31.413 01:59:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.995 01:59:40 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.995 01:59:40 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.995 01:59:40 json_config -- json_config/common.sh@41 -- # kill -0 59441 00:04:31.995 01:59:40 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.564 01:59:41 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.564 01:59:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.564 01:59:41 json_config -- json_config/common.sh@41 -- # kill -0 59441 00:04:32.564 01:59:41 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:32.564 SPDK target shutdown done 00:04:32.564 01:59:41 json_config -- json_config/common.sh@43 -- # break 00:04:32.564 01:59:41 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:32.564 01:59:41 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:32.564 INFO: relaunching applications... 00:04:32.564 01:59:41 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:04:32.564 01:59:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.564 01:59:41 json_config -- json_config/common.sh@9 -- # local app=target 00:04:32.564 01:59:41 json_config -- json_config/common.sh@10 -- # shift 00:04:32.564 01:59:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.564 01:59:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.564 01:59:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.564 01:59:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.564 01:59:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.564 01:59:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=59635 00:04:32.564 01:59:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.564 Waiting for target to run... 00:04:32.564 01:59:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:32.564 01:59:41 json_config -- json_config/common.sh@25 -- # waitforlisten 59635 /var/tmp/spdk_tgt.sock 00:04:32.564 01:59:41 json_config -- common/autotest_common.sh@829 -- # '[' -z 59635 ']' 00:04:32.564 01:59:41 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.564 01:59:41 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.564 01:59:41 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:32.564 01:59:41 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.564 01:59:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.823 [2024-07-23 01:59:41.352348] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:04:32.823 [2024-07-23 01:59:41.352536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59635 ] 00:04:33.083 [2024-07-23 01:59:41.845238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.342 [2024-07-23 01:59:42.037355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.279 01:59:42 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.279 00:04:34.279 01:59:42 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:34.279 01:59:42 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.279 INFO: Checking if target configuration is the same... 00:04:34.279 01:59:42 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:34.279 01:59:42 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.279 01:59:42 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.279 01:59:42 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:34.279 01:59:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:34.279 + '[' 2 -ne 2 ']' 00:04:34.279 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.279 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:34.279 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.279 +++ basename /dev/fd/62 00:04:34.279 ++ mktemp /tmp/62.XXX 00:04:34.279 + tmp_file_1=/tmp/62.cUN 00:04:34.279 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.279 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.279 + tmp_file_2=/tmp/spdk_tgt_config.json.0OK 00:04:34.279 + ret=0 00:04:34.279 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.538 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.797 + diff -u /tmp/62.cUN /tmp/spdk_tgt_config.json.0OK 00:04:34.797 INFO: JSON config files are the same 00:04:34.797 + echo 'INFO: JSON config files are the same' 00:04:34.797 + rm /tmp/62.cUN /tmp/spdk_tgt_config.json.0OK 00:04:34.797 + exit 0 00:04:34.797 01:59:43 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:34.797 INFO: changing configuration and checking if this can be detected... 00:04:34.797 01:59:43 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:04:34.797 01:59:43 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:34.797 01:59:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.056 01:59:43 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.056 01:59:43 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:35.056 01:59:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.056 + '[' 2 -ne 2 ']' 00:04:35.056 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:35.056 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:35.056 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:35.056 +++ basename /dev/fd/62 00:04:35.056 ++ mktemp /tmp/62.XXX 00:04:35.056 + tmp_file_1=/tmp/62.bCx 00:04:35.056 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.056 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.056 + tmp_file_2=/tmp/spdk_tgt_config.json.S6R 00:04:35.056 + ret=0 00:04:35.056 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.314 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.314 + diff -u /tmp/62.bCx /tmp/spdk_tgt_config.json.S6R 00:04:35.314 + ret=1 00:04:35.314 + echo '=== Start of file: /tmp/62.bCx ===' 00:04:35.314 + cat /tmp/62.bCx 00:04:35.314 + echo '=== End of file: /tmp/62.bCx ===' 00:04:35.314 + echo '' 00:04:35.314 + echo '=== Start of file: /tmp/spdk_tgt_config.json.S6R ===' 00:04:35.314 + cat /tmp/spdk_tgt_config.json.S6R 00:04:35.314 + echo '=== End of file: /tmp/spdk_tgt_config.json.S6R ===' 00:04:35.314 + echo '' 00:04:35.314 + rm /tmp/62.bCx 
/tmp/spdk_tgt_config.json.S6R 00:04:35.314 + exit 1 00:04:35.314 INFO: configuration change detected. 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:35.314 01:59:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.314 01:59:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@321 -- # [[ -n 59635 ]] 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.314 01:59:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.314 01:59:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@201 -- # [[ 1 -eq 1 ]] 00:04:35.314 01:59:44 json_config -- json_config/json_config.sh@202 -- # rbd_cleanup 00:04:35.314 01:59:44 json_config -- common/autotest_common.sh@1031 -- # hash ceph 00:04:35.314 01:59:44 json_config -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:04:35.314 + base_dir=/var/tmp/ceph 
00:04:35.314 + image=/var/tmp/ceph/ceph_raw.img 00:04:35.314 + dev=/dev/loop200 00:04:35.314 + pkill -9 ceph 00:04:35.572 + sleep 3 00:04:38.857 + umount /dev/loop200p2 00:04:38.857 umount: /dev/loop200p2: no mount point specified. 00:04:38.857 + losetup -d /dev/loop200 00:04:38.857 losetup: /dev/loop200: failed to use device: No such device 00:04:38.857 + rm -rf /var/tmp/ceph 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:04:38.857 01:59:47 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.857 01:59:47 json_config -- json_config/json_config.sh@327 -- # killprocess 59635 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@948 -- # '[' -z 59635 ']' 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@952 -- # kill -0 59635 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@953 -- # uname 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59635 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:38.857 killing process with pid 59635 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59635' 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@967 -- # kill 59635 00:04:38.857 01:59:47 json_config -- common/autotest_common.sh@972 -- # wait 59635 00:04:39.424 01:59:48 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 
00:04:39.424 01:59:48 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:39.424 01:59:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.424 01:59:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.424 01:59:48 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:39.424 INFO: Success 00:04:39.424 01:59:48 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:39.424 ************************************ 00:04:39.424 END TEST json_config 00:04:39.424 ************************************ 00:04:39.424 00:04:39.424 real 0m12.718s 00:04:39.424 user 0m15.220s 00:04:39.424 sys 0m2.116s 00:04:39.424 01:59:48 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.424 01:59:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.424 01:59:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.424 01:59:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:39.424 01:59:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.424 01:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.424 01:59:48 -- common/autotest_common.sh@10 -- # set +x 00:04:39.424 ************************************ 00:04:39.424 START TEST json_config_extra_key 00:04:39.424 ************************************ 00:04:39.424 01:59:48 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:39.424 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.424 01:59:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f54a7147-29b2-4915-ad44-5b62f2934558 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f54a7147-29b2-4915-ad44-5b62f2934558 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.683 01:59:48 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:39.683 01:59:48 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.683 01:59:48 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.683 01:59:48 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.683 01:59:48 json_config_extra_key -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.683 01:59:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.683 01:59:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.684 01:59:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.684 01:59:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:39.684 01:59:48 
json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.684 01:59:48 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.684 INFO: launching applications... 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:39.684 01:59:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59830 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.684 Waiting for target to run... 
00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:39.684 01:59:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59830 /var/tmp/spdk_tgt.sock 00:04:39.684 01:59:48 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 59830 ']' 00:04:39.684 01:59:48 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.684 01:59:48 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.684 01:59:48 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.684 01:59:48 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.684 01:59:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.684 [2024-07-23 01:59:48.386952] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:39.684 [2024-07-23 01:59:48.387172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59830 ] 00:04:40.252 [2024-07-23 01:59:48.834262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.252 [2024-07-23 01:59:49.015695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.820 01:59:49 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.820 01:59:49 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:40.820 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:40.820 INFO: shutting down applications... 00:04:40.820 01:59:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:40.820 01:59:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59830 ]] 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59830 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59830 00:04:40.820 01:59:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.388 01:59:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.388 01:59:50 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.388 01:59:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59830 00:04:41.388 01:59:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.956 01:59:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.956 01:59:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.956 01:59:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59830 00:04:41.956 01:59:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.524 01:59:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.524 01:59:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.524 01:59:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59830 00:04:42.524 01:59:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.783 01:59:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.783 01:59:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.783 01:59:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59830 00:04:42.783 01:59:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59830 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.352 SPDK target shutdown done 00:04:43.352 01:59:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.352 Success 
00:04:43.352 01:59:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.352 00:04:43.352 real 0m3.916s 00:04:43.352 user 0m3.302s 00:04:43.352 sys 0m0.584s 00:04:43.352 01:59:52 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.352 01:59:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.352 ************************************ 00:04:43.352 END TEST json_config_extra_key 00:04:43.352 ************************************ 00:04:43.352 01:59:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:43.352 01:59:52 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.352 01:59:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:43.352 01:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.352 01:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:43.352 ************************************ 00:04:43.352 START TEST alias_rpc 00:04:43.352 ************************************ 00:04:43.352 01:59:52 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.611 * Looking for test storage... 
00:04:43.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:43.611 01:59:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:43.611 01:59:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59927 00:04:43.611 01:59:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59927 00:04:43.611 01:59:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.611 01:59:52 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 59927 ']' 00:04:43.611 01:59:52 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.611 01:59:52 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.611 01:59:52 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.611 01:59:52 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.611 01:59:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.611 [2024-07-23 01:59:52.345437] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:43.611 [2024-07-23 01:59:52.345666] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59927 ] 00:04:43.870 [2024-07-23 01:59:52.500936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.129 [2024-07-23 01:59:52.692588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.696 01:59:53 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.696 01:59:53 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:44.696 01:59:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:44.953 01:59:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59927 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 59927 ']' 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 59927 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 59927 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.953 01:59:53 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.954 killing process with pid 59927 00:04:44.954 01:59:53 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59927' 00:04:44.954 01:59:53 alias_rpc -- common/autotest_common.sh@967 -- # kill 59927 00:04:44.954 01:59:53 alias_rpc -- common/autotest_common.sh@972 -- # wait 59927 00:04:46.855 00:04:46.855 real 0m3.334s 00:04:46.855 user 0m3.400s 00:04:46.855 sys 0m0.559s 00:04:46.855 01:59:55 alias_rpc -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:04:46.855 01:59:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.855 ************************************ 00:04:46.855 END TEST alias_rpc 00:04:46.855 ************************************ 00:04:46.855 01:59:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:46.855 01:59:55 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:46.855 01:59:55 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.855 01:59:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.855 01:59:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.855 01:59:55 -- common/autotest_common.sh@10 -- # set +x 00:04:46.855 ************************************ 00:04:46.855 START TEST spdkcli_tcp 00:04:46.855 ************************************ 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:46.855 * Looking for test storage... 00:04:46.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:46.855 
01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=60025 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:46.855 01:59:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 60025 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 60025 ']' 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.855 01:59:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:47.114 [2024-07-23 01:59:55.764789] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:47.114 [2024-07-23 01:59:55.765017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60025 ] 00:04:47.373 [2024-07-23 01:59:55.938202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.373 [2024-07-23 01:59:56.142444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.373 [2024-07-23 01:59:56.142453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.310 01:59:56 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.310 01:59:56 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:48.310 01:59:56 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=60042 00:04:48.310 01:59:56 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.310 01:59:56 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:48.569 [ 00:04:48.569 "bdev_malloc_delete", 00:04:48.569 "bdev_malloc_create", 00:04:48.569 "bdev_null_resize", 00:04:48.569 "bdev_null_delete", 00:04:48.569 "bdev_null_create", 00:04:48.569 "bdev_nvme_cuse_unregister", 00:04:48.569 "bdev_nvme_cuse_register", 00:04:48.569 "bdev_opal_new_user", 00:04:48.569 "bdev_opal_set_lock_state", 00:04:48.569 "bdev_opal_delete", 00:04:48.569 "bdev_opal_get_info", 00:04:48.569 "bdev_opal_create", 00:04:48.569 "bdev_nvme_opal_revert", 00:04:48.569 "bdev_nvme_opal_init", 00:04:48.569 "bdev_nvme_send_cmd", 00:04:48.569 "bdev_nvme_get_path_iostat", 00:04:48.569 "bdev_nvme_get_mdns_discovery_info", 00:04:48.569 "bdev_nvme_stop_mdns_discovery", 00:04:48.569 "bdev_nvme_start_mdns_discovery", 00:04:48.569 "bdev_nvme_set_multipath_policy", 00:04:48.569 "bdev_nvme_set_preferred_path", 00:04:48.569 
"bdev_nvme_get_io_paths", 00:04:48.569 "bdev_nvme_remove_error_injection", 00:04:48.569 "bdev_nvme_add_error_injection", 00:04:48.569 "bdev_nvme_get_discovery_info", 00:04:48.569 "bdev_nvme_stop_discovery", 00:04:48.569 "bdev_nvme_start_discovery", 00:04:48.569 "bdev_nvme_get_controller_health_info", 00:04:48.569 "bdev_nvme_disable_controller", 00:04:48.570 "bdev_nvme_enable_controller", 00:04:48.570 "bdev_nvme_reset_controller", 00:04:48.570 "bdev_nvme_get_transport_statistics", 00:04:48.570 "bdev_nvme_apply_firmware", 00:04:48.570 "bdev_nvme_detach_controller", 00:04:48.570 "bdev_nvme_get_controllers", 00:04:48.570 "bdev_nvme_attach_controller", 00:04:48.570 "bdev_nvme_set_hotplug", 00:04:48.570 "bdev_nvme_set_options", 00:04:48.570 "bdev_passthru_delete", 00:04:48.570 "bdev_passthru_create", 00:04:48.570 "bdev_lvol_set_parent_bdev", 00:04:48.570 "bdev_lvol_set_parent", 00:04:48.570 "bdev_lvol_check_shallow_copy", 00:04:48.570 "bdev_lvol_start_shallow_copy", 00:04:48.570 "bdev_lvol_grow_lvstore", 00:04:48.570 "bdev_lvol_get_lvols", 00:04:48.570 "bdev_lvol_get_lvstores", 00:04:48.570 "bdev_lvol_delete", 00:04:48.570 "bdev_lvol_set_read_only", 00:04:48.570 "bdev_lvol_resize", 00:04:48.570 "bdev_lvol_decouple_parent", 00:04:48.570 "bdev_lvol_inflate", 00:04:48.570 "bdev_lvol_rename", 00:04:48.570 "bdev_lvol_clone_bdev", 00:04:48.570 "bdev_lvol_clone", 00:04:48.570 "bdev_lvol_snapshot", 00:04:48.570 "bdev_lvol_create", 00:04:48.570 "bdev_lvol_delete_lvstore", 00:04:48.570 "bdev_lvol_rename_lvstore", 00:04:48.570 "bdev_lvol_create_lvstore", 00:04:48.570 "bdev_raid_set_options", 00:04:48.570 "bdev_raid_remove_base_bdev", 00:04:48.570 "bdev_raid_add_base_bdev", 00:04:48.570 "bdev_raid_delete", 00:04:48.570 "bdev_raid_create", 00:04:48.570 "bdev_raid_get_bdevs", 00:04:48.570 "bdev_error_inject_error", 00:04:48.570 "bdev_error_delete", 00:04:48.570 "bdev_error_create", 00:04:48.570 "bdev_split_delete", 00:04:48.570 "bdev_split_create", 00:04:48.570 "bdev_delay_delete", 
00:04:48.570 "bdev_delay_create", 00:04:48.570 "bdev_delay_update_latency", 00:04:48.570 "bdev_zone_block_delete", 00:04:48.570 "bdev_zone_block_create", 00:04:48.570 "blobfs_create", 00:04:48.570 "blobfs_detect", 00:04:48.570 "blobfs_set_cache_size", 00:04:48.570 "bdev_aio_delete", 00:04:48.570 "bdev_aio_rescan", 00:04:48.570 "bdev_aio_create", 00:04:48.570 "bdev_ftl_set_property", 00:04:48.570 "bdev_ftl_get_properties", 00:04:48.570 "bdev_ftl_get_stats", 00:04:48.570 "bdev_ftl_unmap", 00:04:48.570 "bdev_ftl_unload", 00:04:48.570 "bdev_ftl_delete", 00:04:48.570 "bdev_ftl_load", 00:04:48.570 "bdev_ftl_create", 00:04:48.570 "bdev_virtio_attach_controller", 00:04:48.570 "bdev_virtio_scsi_get_devices", 00:04:48.570 "bdev_virtio_detach_controller", 00:04:48.570 "bdev_virtio_blk_set_hotplug", 00:04:48.570 "bdev_iscsi_delete", 00:04:48.570 "bdev_iscsi_create", 00:04:48.570 "bdev_iscsi_set_options", 00:04:48.570 "bdev_rbd_get_clusters_info", 00:04:48.570 "bdev_rbd_unregister_cluster", 00:04:48.570 "bdev_rbd_register_cluster", 00:04:48.570 "bdev_rbd_resize", 00:04:48.570 "bdev_rbd_delete", 00:04:48.570 "bdev_rbd_create", 00:04:48.570 "accel_error_inject_error", 00:04:48.570 "ioat_scan_accel_module", 00:04:48.570 "dsa_scan_accel_module", 00:04:48.570 "iaa_scan_accel_module", 00:04:48.570 "keyring_file_remove_key", 00:04:48.570 "keyring_file_add_key", 00:04:48.570 "keyring_linux_set_options", 00:04:48.570 "iscsi_get_histogram", 00:04:48.570 "iscsi_enable_histogram", 00:04:48.570 "iscsi_set_options", 00:04:48.570 "iscsi_get_auth_groups", 00:04:48.570 "iscsi_auth_group_remove_secret", 00:04:48.570 "iscsi_auth_group_add_secret", 00:04:48.570 "iscsi_delete_auth_group", 00:04:48.570 "iscsi_create_auth_group", 00:04:48.570 "iscsi_set_discovery_auth", 00:04:48.570 "iscsi_get_options", 00:04:48.570 "iscsi_target_node_request_logout", 00:04:48.570 "iscsi_target_node_set_redirect", 00:04:48.570 "iscsi_target_node_set_auth", 00:04:48.570 "iscsi_target_node_add_lun", 00:04:48.570 
"iscsi_get_stats", 00:04:48.570 "iscsi_get_connections", 00:04:48.570 "iscsi_portal_group_set_auth", 00:04:48.570 "iscsi_start_portal_group", 00:04:48.570 "iscsi_delete_portal_group", 00:04:48.570 "iscsi_create_portal_group", 00:04:48.570 "iscsi_get_portal_groups", 00:04:48.570 "iscsi_delete_target_node", 00:04:48.570 "iscsi_target_node_remove_pg_ig_maps", 00:04:48.570 "iscsi_target_node_add_pg_ig_maps", 00:04:48.570 "iscsi_create_target_node", 00:04:48.570 "iscsi_get_target_nodes", 00:04:48.570 "iscsi_delete_initiator_group", 00:04:48.570 "iscsi_initiator_group_remove_initiators", 00:04:48.570 "iscsi_initiator_group_add_initiators", 00:04:48.570 "iscsi_create_initiator_group", 00:04:48.570 "iscsi_get_initiator_groups", 00:04:48.570 "nvmf_set_crdt", 00:04:48.570 "nvmf_set_config", 00:04:48.570 "nvmf_set_max_subsystems", 00:04:48.570 "nvmf_stop_mdns_prr", 00:04:48.570 "nvmf_publish_mdns_prr", 00:04:48.570 "nvmf_subsystem_get_listeners", 00:04:48.570 "nvmf_subsystem_get_qpairs", 00:04:48.570 "nvmf_subsystem_get_controllers", 00:04:48.570 "nvmf_get_stats", 00:04:48.570 "nvmf_get_transports", 00:04:48.570 "nvmf_create_transport", 00:04:48.570 "nvmf_get_targets", 00:04:48.570 "nvmf_delete_target", 00:04:48.570 "nvmf_create_target", 00:04:48.570 "nvmf_subsystem_allow_any_host", 00:04:48.570 "nvmf_subsystem_remove_host", 00:04:48.570 "nvmf_subsystem_add_host", 00:04:48.570 "nvmf_ns_remove_host", 00:04:48.570 "nvmf_ns_add_host", 00:04:48.570 "nvmf_subsystem_remove_ns", 00:04:48.570 "nvmf_subsystem_add_ns", 00:04:48.570 "nvmf_subsystem_listener_set_ana_state", 00:04:48.570 "nvmf_discovery_get_referrals", 00:04:48.570 "nvmf_discovery_remove_referral", 00:04:48.570 "nvmf_discovery_add_referral", 00:04:48.570 "nvmf_subsystem_remove_listener", 00:04:48.570 "nvmf_subsystem_add_listener", 00:04:48.570 "nvmf_delete_subsystem", 00:04:48.570 "nvmf_create_subsystem", 00:04:48.570 "nvmf_get_subsystems", 00:04:48.570 "env_dpdk_get_mem_stats", 00:04:48.570 "nbd_get_disks", 00:04:48.570 
"nbd_stop_disk", 00:04:48.570 "nbd_start_disk", 00:04:48.570 "ublk_recover_disk", 00:04:48.570 "ublk_get_disks", 00:04:48.570 "ublk_stop_disk", 00:04:48.570 "ublk_start_disk", 00:04:48.570 "ublk_destroy_target", 00:04:48.570 "ublk_create_target", 00:04:48.570 "virtio_blk_create_transport", 00:04:48.570 "virtio_blk_get_transports", 00:04:48.570 "vhost_controller_set_coalescing", 00:04:48.570 "vhost_get_controllers", 00:04:48.570 "vhost_delete_controller", 00:04:48.570 "vhost_create_blk_controller", 00:04:48.570 "vhost_scsi_controller_remove_target", 00:04:48.570 "vhost_scsi_controller_add_target", 00:04:48.570 "vhost_start_scsi_controller", 00:04:48.570 "vhost_create_scsi_controller", 00:04:48.570 "thread_set_cpumask", 00:04:48.570 "framework_get_governor", 00:04:48.570 "framework_get_scheduler", 00:04:48.570 "framework_set_scheduler", 00:04:48.570 "framework_get_reactors", 00:04:48.570 "thread_get_io_channels", 00:04:48.570 "thread_get_pollers", 00:04:48.570 "thread_get_stats", 00:04:48.570 "framework_monitor_context_switch", 00:04:48.570 "spdk_kill_instance", 00:04:48.570 "log_enable_timestamps", 00:04:48.570 "log_get_flags", 00:04:48.570 "log_clear_flag", 00:04:48.570 "log_set_flag", 00:04:48.570 "log_get_level", 00:04:48.570 "log_set_level", 00:04:48.570 "log_get_print_level", 00:04:48.570 "log_set_print_level", 00:04:48.570 "framework_enable_cpumask_locks", 00:04:48.570 "framework_disable_cpumask_locks", 00:04:48.570 "framework_wait_init", 00:04:48.570 "framework_start_init", 00:04:48.570 "scsi_get_devices", 00:04:48.570 "bdev_get_histogram", 00:04:48.570 "bdev_enable_histogram", 00:04:48.570 "bdev_set_qos_limit", 00:04:48.570 "bdev_set_qd_sampling_period", 00:04:48.570 "bdev_get_bdevs", 00:04:48.570 "bdev_reset_iostat", 00:04:48.570 "bdev_get_iostat", 00:04:48.570 "bdev_examine", 00:04:48.570 "bdev_wait_for_examine", 00:04:48.570 "bdev_set_options", 00:04:48.570 "notify_get_notifications", 00:04:48.570 "notify_get_types", 00:04:48.570 "accel_get_stats", 
00:04:48.570 "accel_set_options", 00:04:48.570 "accel_set_driver", 00:04:48.570 "accel_crypto_key_destroy", 00:04:48.570 "accel_crypto_keys_get", 00:04:48.570 "accel_crypto_key_create", 00:04:48.570 "accel_assign_opc", 00:04:48.570 "accel_get_module_info", 00:04:48.570 "accel_get_opc_assignments", 00:04:48.570 "vmd_rescan", 00:04:48.570 "vmd_remove_device", 00:04:48.570 "vmd_enable", 00:04:48.570 "sock_get_default_impl", 00:04:48.570 "sock_set_default_impl", 00:04:48.570 "sock_impl_set_options", 00:04:48.570 "sock_impl_get_options", 00:04:48.570 "iobuf_get_stats", 00:04:48.570 "iobuf_set_options", 00:04:48.570 "framework_get_pci_devices", 00:04:48.570 "framework_get_config", 00:04:48.570 "framework_get_subsystems", 00:04:48.570 "trace_get_info", 00:04:48.570 "trace_get_tpoint_group_mask", 00:04:48.570 "trace_disable_tpoint_group", 00:04:48.570 "trace_enable_tpoint_group", 00:04:48.570 "trace_clear_tpoint_mask", 00:04:48.570 "trace_set_tpoint_mask", 00:04:48.570 "keyring_get_keys", 00:04:48.570 "spdk_get_version", 00:04:48.570 "rpc_get_methods" 00:04:48.570 ] 00:04:48.570 01:59:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.571 01:59:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:48.571 01:59:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 60025 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 60025 ']' 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 60025 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60025 00:04:48.571 01:59:57 spdkcli_tcp -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.571 killing process with pid 60025 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60025' 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 60025 00:04:48.571 01:59:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 60025 00:04:50.474 00:04:50.474 real 0m3.638s 00:04:50.474 user 0m6.211s 00:04:50.474 sys 0m0.687s 00:04:50.474 01:59:59 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:50.474 01:59:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.474 ************************************ 00:04:50.474 END TEST spdkcli_tcp 00:04:50.474 ************************************ 00:04:50.474 01:59:59 -- common/autotest_common.sh@1142 -- # return 0 00:04:50.474 01:59:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.474 01:59:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:50.474 01:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.474 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:50.474 ************************************ 00:04:50.474 START TEST dpdk_mem_utility 00:04:50.474 ************************************ 00:04:50.474 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:50.734 * Looking for test storage... 
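The spdkcli_tcp test above exercises the RPC layer over TCP by having `socat` bridge `TCP-LISTEN:9998` to `UNIX-CONNECT:/var/tmp/spdk.sock`, then calling `rpc.py -s 127.0.0.1 -p 9998 rpc_get_methods`. The reply is a JSON array of method-name strings; a quick way to slice such a reply from a saved log (the array below is a short illustrative excerpt, not the full list):

```shell
# Count iscsi_* methods in a (shortened, illustrative) rpc_get_methods
# reply. Real replies hold one quoted method name per array entry.
methods_json='[
  "iscsi_get_target_nodes",
  "iscsi_create_target_node",
  "nvmf_get_subsystems",
  "rpc_get_methods"
]'
iscsi_count=$(printf '%s\n' "$methods_json" | grep -c '"iscsi_')
echo "$iscsi_count"   # prints 2
```

For anything beyond a quick count, piping the reply through a JSON-aware tool is more robust than grep, since the array layout is not guaranteed to stay one-name-per-line.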
00:04:50.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:50.734 01:59:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:50.734 01:59:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=60133 00:04:50.734 01:59:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:50.734 01:59:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 60133 00:04:50.734 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 60133 ']' 00:04:50.734 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.734 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.734 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.734 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.734 01:59:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.734 [2024-07-23 01:59:59.447021] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:50.734 [2024-07-23 01:59:59.447223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60133 ] 00:04:50.997 [2024-07-23 01:59:59.616940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.281 [2024-07-23 01:59:59.831400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.858 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.858 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:51.858 02:00:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:51.858 02:00:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:51.858 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.858 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.858 { 00:04:51.858 "filename": "/tmp/spdk_mem_dump.txt" 00:04:51.858 } 00:04:51.858 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.858 02:00:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:52.118 DPDK memory size 820.000000 MiB in 1 heap(s) 00:04:52.118 1 heaps totaling size 820.000000 MiB 00:04:52.118 size: 820.000000 MiB heap id: 0 00:04:52.118 end heaps---------- 00:04:52.118 8 mempools totaling size 598.116089 MiB 00:04:52.118 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:52.118 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:52.118 size: 84.521057 MiB name: bdev_io_60133 00:04:52.118 size: 51.011292 MiB name: evtpool_60133 00:04:52.118 size: 50.003479 MiB name: msgpool_60133 00:04:52.118 size: 
21.763794 MiB name: PDU_Pool 00:04:52.118 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:52.118 size: 0.026123 MiB name: Session_Pool 00:04:52.118 end mempools------- 00:04:52.118 6 memzones totaling size 4.142822 MiB 00:04:52.118 size: 1.000366 MiB name: RG_ring_0_60133 00:04:52.118 size: 1.000366 MiB name: RG_ring_1_60133 00:04:52.118 size: 1.000366 MiB name: RG_ring_4_60133 00:04:52.118 size: 1.000366 MiB name: RG_ring_5_60133 00:04:52.118 size: 0.125366 MiB name: RG_ring_2_60133 00:04:52.118 size: 0.015991 MiB name: RG_ring_3_60133 00:04:52.118 end memzones------- 00:04:52.118 02:00:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.118 heap id: 0 total size: 820.000000 MiB number of busy elements: 298 number of free elements: 18 00:04:52.118 list of free elements. size: 18.452026 MiB 00:04:52.118 element at address: 0x200000400000 with size: 1.999451 MiB 00:04:52.118 element at address: 0x200000800000 with size: 1.996887 MiB 00:04:52.118 element at address: 0x200007000000 with size: 1.995972 MiB 00:04:52.118 element at address: 0x20000b200000 with size: 1.995972 MiB 00:04:52.118 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:52.118 element at address: 0x200019500040 with size: 0.999939 MiB 00:04:52.118 element at address: 0x200019600000 with size: 0.999084 MiB 00:04:52.118 element at address: 0x200003e00000 with size: 0.996094 MiB 00:04:52.118 element at address: 0x200032200000 with size: 0.994324 MiB 00:04:52.118 element at address: 0x200018e00000 with size: 0.959656 MiB 00:04:52.118 element at address: 0x200019900040 with size: 0.936401 MiB 00:04:52.118 element at address: 0x200000200000 with size: 0.830200 MiB 00:04:52.118 element at address: 0x20001b000000 with size: 0.564636 MiB 00:04:52.119 element at address: 0x200019200000 with size: 0.487976 MiB 00:04:52.119 element at address: 0x200019a00000 with size: 0.485413 MiB 00:04:52.119 element at 
address: 0x200013800000 with size: 0.467651 MiB 00:04:52.119 element at address: 0x200028400000 with size: 0.390442 MiB 00:04:52.119 element at address: 0x200003a00000 with size: 0.351990 MiB 00:04:52.119 list of standard malloc elements. size: 199.283569 MiB 00:04:52.119 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:04:52.119 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:04:52.119 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:52.119 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:52.119 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:04:52.119 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:52.119 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:04:52.119 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:52.119 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:04:52.119 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:04:52.119 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:04:52.119 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5480 with size: 0.000244 MiB 
00:04:52.119 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7200 with 
size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:04:52.119 element at address: 
0x200003aff980 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003affa80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200003eff000 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:04:52.119 
element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013877b80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013877c80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013877d80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013877e80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013877f80 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013878080 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013878180 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013878280 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013878380 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013878480 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200013878580 with size: 0.000244 MiB 00:04:52.119 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:04:52.119 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x2000192fdd00 with size: 0.000244 
MiB 00:04:52.120 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:04:52.120 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x200019abc680 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:04:52.120 element at 
address: 0x20001b093bc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:04:52.120 element at address: 0x200028463f40 with size: 0.000244 MiB 00:04:52.120 element at address: 0x200028464040 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846ad00 with size: 0.000244 MiB 
00:04:52.120 element at address: 0x20002846af80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b080 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b180 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b280 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b380 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b480 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b580 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b680 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b780 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b880 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846b980 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846be80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c080 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c180 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c280 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c380 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c480 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c580 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c680 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c780 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c880 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846c980 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846ca80 with 
size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d080 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d180 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d280 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d380 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d480 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d580 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d680 with size: 0.000244 MiB 00:04:52.120 element at address: 0x20002846d780 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846d880 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846d980 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846da80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846db80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846de80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846df80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e080 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e180 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e280 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e380 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e480 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e580 with size: 0.000244 MiB 00:04:52.121 element at address: 
0x20002846e680 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e780 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e880 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846e980 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f080 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f180 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f280 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f380 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f480 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f580 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f680 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f780 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f880 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846f980 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:04:52.121 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:04:52.121 list of memzone associated elements. 
size: 602.264404 MiB 00:04:52.121 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:04:52.121 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.121 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:04:52.121 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:52.121 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:04:52.121 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_60133_0 00:04:52.121 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:04:52.121 associated memzone info: size: 48.002930 MiB name: MP_evtpool_60133_0 00:04:52.121 element at address: 0x200003fff340 with size: 48.003113 MiB 00:04:52.121 associated memzone info: size: 48.002930 MiB name: MP_msgpool_60133_0 00:04:52.121 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:04:52.121 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:52.121 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:04:52.121 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.121 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:04:52.121 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_60133 00:04:52.121 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:04:52.121 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_60133 00:04:52.121 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:52.121 associated memzone info: size: 1.007996 MiB name: MP_evtpool_60133 00:04:52.121 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:52.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.121 element at address: 0x200019abc780 with size: 1.008179 MiB 00:04:52.121 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.121 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:52.121 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.121 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:04:52.121 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.121 element at address: 0x200003eff100 with size: 1.000549 MiB 00:04:52.121 associated memzone info: size: 1.000366 MiB name: RG_ring_0_60133 00:04:52.121 element at address: 0x200003affb80 with size: 1.000549 MiB 00:04:52.121 associated memzone info: size: 1.000366 MiB name: RG_ring_1_60133 00:04:52.121 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:04:52.121 associated memzone info: size: 1.000366 MiB name: RG_ring_4_60133 00:04:52.121 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:04:52.121 associated memzone info: size: 1.000366 MiB name: RG_ring_5_60133 00:04:52.121 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:04:52.121 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_60133 00:04:52.121 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:04:52.121 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.121 element at address: 0x200013878680 with size: 0.500549 MiB 00:04:52.121 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.121 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:04:52.121 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.121 element at address: 0x200003adf740 with size: 0.125549 MiB 00:04:52.121 associated memzone info: size: 0.125366 MiB name: RG_ring_2_60133 00:04:52.121 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:04:52.121 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.121 element at address: 0x200028464140 with size: 0.023804 MiB 00:04:52.121 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.121 element at address: 0x200003adb500 with size: 0.016174 MiB 00:04:52.121 
associated memzone info: size: 0.015991 MiB name: RG_ring_3_60133 00:04:52.121 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:04:52.121 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.121 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:04:52.121 associated memzone info: size: 0.000183 MiB name: MP_msgpool_60133 00:04:52.121 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:04:52.121 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_60133 00:04:52.121 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:04:52.121 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.121 02:00:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.121 02:00:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 60133 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 60133 ']' 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 60133 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60133 00:04:52.121 killing process with pid 60133 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60133' 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 60133 00:04:52.121 02:00:00 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 60133 00:04:54.023 00:04:54.023 real 0m3.483s 00:04:54.023 user 0m3.362s 00:04:54.023 sys 
0m0.667s 00:04:54.023 02:00:02 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.023 02:00:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.023 ************************************ 00:04:54.023 END TEST dpdk_mem_utility 00:04:54.023 ************************************ 00:04:54.023 02:00:02 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.023 02:00:02 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:54.023 02:00:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.023 02:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.023 02:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:54.023 ************************************ 00:04:54.023 START TEST event 00:04:54.023 ************************************ 00:04:54.023 02:00:02 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:54.023 * Looking for test storage... 
00:04:54.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:54.282 02:00:02 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:54.282 02:00:02 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:54.282 02:00:02 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.282 02:00:02 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:54.282 02:00:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.282 02:00:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.282 ************************************ 00:04:54.282 START TEST event_perf 00:04:54.282 ************************************ 00:04:54.282 02:00:02 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.282 Running I/O for 1 seconds...[2024-07-23 02:00:02.862214] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:04:54.282 [2024-07-23 02:00:02.862398] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60228 ] 00:04:54.282 [2024-07-23 02:00:03.035265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.540 [2024-07-23 02:00:03.224132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.540 [2024-07-23 02:00:03.224192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.540 [2024-07-23 02:00:03.224355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.540 [2024-07-23 02:00:03.224372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:55.913 Running I/O for 1 seconds... 
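The event_perf binary above is launched with `-m 0xF -t 1`, and the EAL log then reports four reactors. As a side note, a hex core mask like 0xF expands to individual core IDs; a minimal bash sketch of that expansion (illustrative only — SPDK/DPDK parse the mask internally, this is not harness code):

```shell
# Expand a hex core mask (the -m argument) into a list of enabled cores.
mask_to_cores() {
    local mask=$(( $1 )) i cores=""   # bash arithmetic accepts 0x-prefixed hex
    for (( i = 0; i < 64; i++ )); do
        if (( (mask >> i) & 1 )); then
            cores="${cores:+$cores }$i"
        fi
    done
    echo "$cores"
}

mask_to_cores 0xF   # prints: 0 1 2 3
```

So `-m 0xF` enables cores 0-3, which matches the four "Reactor started on core N" lines in the log.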
00:04:55.913 lcore 0: 186695 00:04:55.913 lcore 1: 186695 00:04:55.913 lcore 2: 186691 00:04:55.913 lcore 3: 186692 00:04:55.913 done. 00:04:55.913 00:04:55.913 real 0m1.739s 00:04:55.913 user 0m4.490s 00:04:55.913 sys 0m0.123s 00:04:55.913 02:00:04 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.913 02:00:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.913 ************************************ 00:04:55.913 END TEST event_perf 00:04:55.913 ************************************ 00:04:55.913 02:00:04 event -- common/autotest_common.sh@1142 -- # return 0 00:04:55.913 02:00:04 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.913 02:00:04 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:55.913 02:00:04 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.913 02:00:04 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.913 ************************************ 00:04:55.913 START TEST event_reactor 00:04:55.913 ************************************ 00:04:55.913 02:00:04 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:55.913 [2024-07-23 02:00:04.652912] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:04:55.913 [2024-07-23 02:00:04.653077] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60267 ] 00:04:56.171 [2024-07-23 02:00:04.829738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.432 [2024-07-23 02:00:05.062600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.809 test_start 00:04:57.809 oneshot 00:04:57.809 tick 100 00:04:57.809 tick 100 00:04:57.809 tick 250 00:04:57.809 tick 100 00:04:57.809 tick 100 00:04:57.809 tick 100 00:04:57.809 tick 250 00:04:57.809 tick 500 00:04:57.809 tick 100 00:04:57.809 tick 100 00:04:57.809 tick 250 00:04:57.809 tick 100 00:04:57.809 tick 100 00:04:57.809 test_end 00:04:57.809 00:04:57.809 real 0m1.771s 00:04:57.809 user 0m1.533s 00:04:57.809 sys 0m0.128s 00:04:57.809 02:00:06 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.809 02:00:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:57.809 ************************************ 00:04:57.809 END TEST event_reactor 00:04:57.809 ************************************ 00:04:57.809 02:00:06 event -- common/autotest_common.sh@1142 -- # return 0 00:04:57.809 02:00:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.809 02:00:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:57.809 02:00:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.809 02:00:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.809 ************************************ 00:04:57.809 START TEST event_reactor_perf 00:04:57.809 ************************************ 00:04:57.809 02:00:06 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:57.809 [2024-07-23 02:00:06.468900] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:04:57.809 [2024-07-23 02:00:06.469041] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60304 ] 00:04:58.067 [2024-07-23 02:00:06.628576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.067 [2024-07-23 02:00:06.817382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.442 test_start 00:04:59.442 test_end 00:04:59.442 Performance: 364595 events per second 00:04:59.442 00:04:59.442 real 0m1.688s 00:04:59.442 user 0m1.481s 00:04:59.442 sys 0m0.099s 00:04:59.442 02:00:08 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.442 02:00:08 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.442 ************************************ 00:04:59.442 END TEST event_reactor_perf 00:04:59.442 ************************************ 00:04:59.442 02:00:08 event -- common/autotest_common.sh@1142 -- # return 0 00:04:59.442 02:00:08 event -- event/event.sh@49 -- # uname -s 00:04:59.442 02:00:08 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:59.442 02:00:08 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:59.443 02:00:08 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.443 02:00:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.443 02:00:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.443 ************************************ 00:04:59.443 START TEST event_scheduler 00:04:59.443 ************************************ 00:04:59.443 02:00:08 
event.event_scheduler -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:59.700 * Looking for test storage... 00:04:59.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:59.700 02:00:08 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:59.700 02:00:08 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60372 00:04:59.700 02:00:08 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.700 02:00:08 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60372 00:04:59.700 02:00:08 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:59.700 02:00:08 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 60372 ']' 00:04:59.700 02:00:08 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.700 02:00:08 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.700 02:00:08 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.700 02:00:08 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.700 02:00:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:59.700 [2024-07-23 02:00:08.418266] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
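The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the harness waitforlisten helper. A hedged sketch of the underlying idea — poll for the RPC socket with a bounded retry budget (function name and timing here are illustrative, not the actual common/autotest_common.sh implementation, which also re-checks that the target pid is still alive between polls):

```shell
# Poll until a UNIX-domain socket appears on disk, up to a retry limit.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$sock" ] && return 0   # socket node exists: server is listening
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```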
00:04:59.700 [2024-07-23 02:00:08.418506] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60372 ] 00:04:59.958 [2024-07-23 02:00:08.594059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:00.216 [2024-07-23 02:00:08.847873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.216 [2024-07-23 02:00:08.848055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.216 [2024-07-23 02:00:08.848154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.216 [2024-07-23 02:00:08.848175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:00.784 02:00:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.784 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.784 POWER: Cannot set governor of lcore 0 to performance 00:05:00.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.784 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.784 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:00.784 POWER: Cannot set governor of lcore 0 to userspace 00:05:00.784 GUEST_CHANNEL: Opening 
channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:00.784 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:00.784 POWER: Unable to set Power Management Environment for lcore 0 00:05:00.784 [2024-07-23 02:00:09.270566] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:00.784 [2024-07-23 02:00:09.270587] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:00.784 [2024-07-23 02:00:09.270603] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:00.784 [2024-07-23 02:00:09.270640] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:00.784 [2024-07-23 02:00:09.270655] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:00.784 [2024-07-23 02:00:09.270667] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.784 02:00:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.784 [2024-07-23 02:00:09.534507] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
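Because no cpufreq governor is writable inside the VM, the dynamic scheduler above falls back and logs its default thresholds: load limit 20, core limit 80, core busy 95 (all percentages). A toy illustration of how thresholds like these can gate balancing decisions — purely a sketch with invented function names; the real decision logic lives in scheduler_dynamic.c and is considerably more involved:

```shell
# Toy threshold checks modeled on the values logged above.
LOAD_LIMIT=20   # threads below this load % are candidates to consolidate
CORE_LIMIT=80   # cores above this busy % should shed work

should_consolidate() {   # $1 = thread load in percent
    (( $1 < LOAD_LIMIT )) && echo yes || echo no
}

core_overloaded() {      # $1 = core busy time in percent
    (( $1 > CORE_LIMIT )) && echo yes || echo no
}

should_consolidate 5    # prints: yes
core_overloaded 95      # prints: yes
```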
00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:00.784 02:00:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.784 02:00:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:00.784 ************************************ 00:05:00.784 START TEST scheduler_create_thread 00:05:00.784 ************************************ 00:05:00.784 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:00.784 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:00.784 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:00.784 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.042 2 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.042 3 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.042 4 00:05:01.042 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 5 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 6 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.043 7 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 8 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 9 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 10 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.043 02:00:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.977 02:00:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:01.977 02:00:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:01.977 02:00:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:01.977 02:00:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:01.977 02:00:10 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.953 02:00:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.953 00:05:02.953 real 0m2.138s 00:05:02.953 user 0m0.018s 00:05:02.953 sys 0m0.007s 00:05:02.953 02:00:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.953 02:00:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:02.953 ************************************ 00:05:02.953 END TEST scheduler_create_thread 00:05:02.953 ************************************ 00:05:02.953 02:00:11 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:03.211 02:00:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:03.211 02:00:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60372 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 60372 ']' 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 60372 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60372 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:03.211 killing process with pid 60372 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60372' 00:05:03.211 02:00:11 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 60372 00:05:03.211 02:00:11 event.event_scheduler -- 
common/autotest_common.sh@972 -- # wait 60372 00:05:03.469 [2024-07-23 02:00:12.166816] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:04.848 00:05:04.848 real 0m5.051s 00:05:04.848 user 0m7.990s 00:05:04.848 sys 0m0.461s 00:05:04.848 02:00:13 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.849 02:00:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.849 ************************************ 00:05:04.849 END TEST event_scheduler 00:05:04.849 ************************************ 00:05:04.849 02:00:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:04.849 02:00:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:04.849 02:00:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:04.849 02:00:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.849 02:00:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.849 02:00:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.850 ************************************ 00:05:04.850 START TEST app_repeat 00:05:04.850 ************************************ 00:05:04.850 02:00:13 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60478 
00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.850 Process app_repeat pid: 60478 00:05:04.850 02:00:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60478' 00:05:04.851 02:00:13 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:04.851 02:00:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:04.851 spdk_app_start Round 0 00:05:04.851 02:00:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:04.851 02:00:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60478 /var/tmp/spdk-nbd.sock 00:05:04.851 02:00:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60478 ']' 00:05:04.851 02:00:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:04.851 02:00:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:04.851 02:00:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:04.851 02:00:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.851 02:00:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.851 [2024-07-23 02:00:13.360748] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:04.851 [2024-07-23 02:00:13.360932] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60478 ] 00:05:04.851 [2024-07-23 02:00:13.535143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.113 [2024-07-23 02:00:13.717673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.113 [2024-07-23 02:00:13.717686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.681 02:00:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.681 02:00:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:05.681 02:00:14 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:05.940 Malloc0 00:05:05.940 02:00:14 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:06.199 Malloc1 00:05:06.199 02:00:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:06.199 02:00:14 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.199 02:00:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:06.458 /dev/nbd0 00:05:06.458 02:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:06.458 02:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.458 1+0 records in 00:05:06.458 1+0 
records out 00:05:06.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420782 s, 9.7 MB/s 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:06.458 02:00:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:06.458 02:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.458 02:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.458 02:00:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:06.718 /dev/nbd1 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:06.718 1+0 records in 00:05:06.718 1+0 records out 00:05:06.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327771 s, 12.5 MB/s 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:06.718 02:00:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.718 02:00:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:06.977 { 00:05:06.977 "nbd_device": "/dev/nbd0", 00:05:06.977 "bdev_name": "Malloc0" 00:05:06.977 }, 00:05:06.977 { 00:05:06.977 "nbd_device": "/dev/nbd1", 00:05:06.977 "bdev_name": "Malloc1" 00:05:06.977 } 00:05:06.977 ]' 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:06.977 { 00:05:06.977 "nbd_device": "/dev/nbd0", 00:05:06.977 "bdev_name": "Malloc0" 00:05:06.977 }, 00:05:06.977 { 00:05:06.977 "nbd_device": "/dev/nbd1", 00:05:06.977 "bdev_name": "Malloc1" 00:05:06.977 } 00:05:06.977 ]' 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:06.977 /dev/nbd1' 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:06.977 /dev/nbd1' 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:06.977 256+0 records in 00:05:06.977 256+0 records out 00:05:06.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0040894 s, 256 MB/s 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:06.977 256+0 records in 00:05:06.977 256+0 records out 00:05:06.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281535 s, 37.2 MB/s 00:05:06.977 02:00:15 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:06.977 02:00:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:07.236 256+0 records in 00:05:07.236 256+0 records out 00:05:07.236 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290762 s, 36.1 MB/s 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.236 02:00:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:07.496 02:00:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:07.761 02:00:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:08.022 02:00:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:08.022 02:00:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:08.281 02:00:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.216 [2024-07-23 02:00:17.933359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.475 [2024-07-23 02:00:18.108162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.475 [2024-07-23 02:00:18.108171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.736 
[2024-07-23 02:00:18.264526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.736 [2024-07-23 02:00:18.264619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:11.682 spdk_app_start Round 1 00:05:11.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:11.682 02:00:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:11.682 02:00:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:11.682 02:00:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60478 /var/tmp/spdk-nbd.sock 00:05:11.682 02:00:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60478 ']' 00:05:11.682 02:00:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:11.682 02:00:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.682 02:00:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:11.682 02:00:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.682 02:00:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:11.682 02:00:20 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.682 02:00:20 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:11.682 02:00:20 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:11.941 Malloc0 00:05:11.941 02:00:20 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.199 Malloc1 00:05:12.199 02:00:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.199 02:00:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.200 02:00:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.200 02:00:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.200 02:00:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:12.200 02:00:20 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.200 02:00:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.200 02:00:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.200 02:00:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:12.200 /dev/nbd0 00:05:12.458 02:00:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:12.458 02:00:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:12.458 02:00:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.458 1+0 records in 00:05:12.458 1+0 records out 00:05:12.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033045 s, 12.4 MB/s 00:05:12.458 02:00:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.458 02:00:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:12.459 02:00:21 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.459 02:00:21 
event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:12.459 02:00:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:12.459 02:00:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.459 02:00:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.459 02:00:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:12.718 /dev/nbd1 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:12.718 1+0 records in 00:05:12.718 1+0 records out 00:05:12.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301794 s, 13.6 MB/s 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:12.718 02:00:21 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:12.718 02:00:21 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.718 02:00:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.976 02:00:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:12.976 { 00:05:12.976 "nbd_device": "/dev/nbd0", 00:05:12.976 "bdev_name": "Malloc0" 00:05:12.976 }, 00:05:12.977 { 00:05:12.977 "nbd_device": "/dev/nbd1", 00:05:12.977 "bdev_name": "Malloc1" 00:05:12.977 } 00:05:12.977 ]' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:12.977 { 00:05:12.977 "nbd_device": "/dev/nbd0", 00:05:12.977 "bdev_name": "Malloc0" 00:05:12.977 }, 00:05:12.977 { 00:05:12.977 "nbd_device": "/dev/nbd1", 00:05:12.977 "bdev_name": "Malloc1" 00:05:12.977 } 00:05:12.977 ]' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:12.977 /dev/nbd1' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:12.977 /dev/nbd1' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:12.977 
02:00:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:12.977 256+0 records in 00:05:12.977 256+0 records out 00:05:12.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00769466 s, 136 MB/s 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:12.977 256+0 records in 00:05:12.977 256+0 records out 00:05:12.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267385 s, 39.2 MB/s 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:12.977 256+0 records in 00:05:12.977 256+0 records out 00:05:12.977 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312825 s, 33.5 MB/s 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:12.977 02:00:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:13.236 02:00:21 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:13.236 02:00:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.495 02:00:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.754 02:00:22 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:13.754 02:00:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:13.754 02:00:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:14.320 02:00:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.254 [2024-07-23 02:00:23.917607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.513 [2024-07-23 02:00:24.102081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.513 [2024-07-23 02:00:24.102085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.772 [2024-07-23 02:00:24.308173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.772 [2024-07-23 02:00:24.308250] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:17.149 spdk_app_start Round 2 00:05:17.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:17.149 02:00:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.149 02:00:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:17.149 02:00:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60478 /var/tmp/spdk-nbd.sock 00:05:17.149 02:00:25 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60478 ']' 00:05:17.149 02:00:25 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.149 02:00:25 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.150 02:00:25 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:17.150 02:00:25 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.150 02:00:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.409 02:00:26 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.409 02:00:26 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:17.409 02:00:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.668 Malloc0 00:05:17.668 02:00:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.927 Malloc1 00:05:17.927 02:00:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.927 02:00:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.186 /dev/nbd0 00:05:18.186 02:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.186 02:00:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 
00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.186 1+0 records in 00:05:18.186 1+0 records out 00:05:18.186 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459765 s, 8.9 MB/s 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.186 02:00:26 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:18.186 02:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.186 02:00:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.186 02:00:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.445 /dev/nbd1 00:05:18.445 02:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.445 02:00:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:18.445 02:00:27 event.app_repeat -- 
common/autotest_common.sh@871 -- # break 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.445 1+0 records in 00:05:18.445 1+0 records out 00:05:18.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267003 s, 15.3 MB/s 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.445 02:00:27 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:18.445 02:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.445 02:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.445 02:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.446 02:00:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.446 02:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.705 { 00:05:18.705 "nbd_device": "/dev/nbd0", 00:05:18.705 "bdev_name": "Malloc0" 00:05:18.705 }, 00:05:18.705 { 00:05:18.705 "nbd_device": "/dev/nbd1", 00:05:18.705 "bdev_name": "Malloc1" 00:05:18.705 } 00:05:18.705 ]' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.705 { 
00:05:18.705 "nbd_device": "/dev/nbd0", 00:05:18.705 "bdev_name": "Malloc0" 00:05:18.705 }, 00:05:18.705 { 00:05:18.705 "nbd_device": "/dev/nbd1", 00:05:18.705 "bdev_name": "Malloc1" 00:05:18.705 } 00:05:18.705 ]' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.705 /dev/nbd1' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.705 /dev/nbd1' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.705 256+0 records in 00:05:18.705 256+0 records out 00:05:18.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00582706 s, 180 MB/s 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.705 02:00:27 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.705 256+0 records in 00:05:18.705 256+0 records out 00:05:18.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247823 s, 42.3 MB/s 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.705 02:00:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.963 256+0 records in 00:05:18.963 256+0 records out 00:05:18.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365701 s, 28.7 MB/s 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.963 02:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.221 02:00:27 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.221 02:00:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.788 02:00:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.788 02:00:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.047 02:00:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.984 
[2024-07-23 02:00:29.711653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.242 [2024-07-23 02:00:29.878949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.242 [2024-07-23 02:00:29.878961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.501 [2024-07-23 02:00:30.037229] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.501 [2024-07-23 02:00:30.037320] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.404 02:00:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60478 /var/tmp/spdk-nbd.sock 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 60478 ']' 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:23.404 02:00:31 event.app_repeat -- event/event.sh@39 -- # killprocess 60478 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 60478 ']' 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 60478 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.404 02:00:31 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60478 00:05:23.404 02:00:32 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.404 02:00:32 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.404 killing process with pid 60478 00:05:23.404 02:00:32 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60478' 00:05:23.404 02:00:32 event.app_repeat -- common/autotest_common.sh@967 -- # kill 60478 00:05:23.404 02:00:32 event.app_repeat -- common/autotest_common.sh@972 -- # wait 60478 00:05:24.342 spdk_app_start is called in Round 0. 00:05:24.342 Shutdown signal received, stop current app iteration 00:05:24.342 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:05:24.342 spdk_app_start is called in Round 1. 00:05:24.342 Shutdown signal received, stop current app iteration 00:05:24.342 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:05:24.342 spdk_app_start is called in Round 2. 
00:05:24.342 Shutdown signal received, stop current app iteration 00:05:24.342 Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 reinitialization... 00:05:24.342 spdk_app_start is called in Round 3. 00:05:24.342 Shutdown signal received, stop current app iteration 00:05:24.342 02:00:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:24.342 02:00:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:24.342 00:05:24.342 real 0m19.622s 00:05:24.342 user 0m41.920s 00:05:24.342 sys 0m2.748s 00:05:24.342 02:00:32 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.342 02:00:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.342 ************************************ 00:05:24.342 END TEST app_repeat 00:05:24.342 ************************************ 00:05:24.342 02:00:32 event -- common/autotest_common.sh@1142 -- # return 0 00:05:24.342 02:00:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:24.342 02:00:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:24.342 02:00:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.342 02:00:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.342 02:00:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:24.342 ************************************ 00:05:24.342 START TEST cpu_locks 00:05:24.342 ************************************ 00:05:24.342 02:00:32 event.cpu_locks -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:24.342 * Looking for test storage... 
00:05:24.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:24.342 02:00:33 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:24.342 02:00:33 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:24.342 02:00:33 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:24.342 02:00:33 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:24.342 02:00:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:24.342 02:00:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.342 02:00:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.342 ************************************ 00:05:24.342 START TEST default_locks 00:05:24.342 ************************************ 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60917 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60917 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60917 ']' 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.342 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.343 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:24.343 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.343 02:00:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.602 [2024-07-23 02:00:33.237661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:24.602 [2024-07-23 02:00:33.237897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60917 ] 00:05:24.861 [2024-07-23 02:00:33.408275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.861 [2024-07-23 02:00:33.605869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.798 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.798 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:25.798 02:00:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60917 00:05:25.798 02:00:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60917 00:05:25.798 02:00:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60917 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 60917 ']' 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 60917 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60917 00:05:26.057 
02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.057 killing process with pid 60917 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60917' 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 60917 00:05:26.057 02:00:34 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 60917 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60917 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 60917 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 60917 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 60917 ']' 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.963 ERROR: process (pid: 60917) is no longer running 00:05:27.963 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (60917) - No such process 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:27.963 00:05:27.963 real 0m3.505s 00:05:27.963 user 0m3.375s 00:05:27.963 sys 0m0.703s 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.963 02:00:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.963 ************************************ 00:05:27.963 END TEST default_locks 00:05:27.963 ************************************ 00:05:27.963 
02:00:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:27.963 02:00:36 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:27.963 02:00:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.963 02:00:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.963 02:00:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.963 ************************************ 00:05:27.963 START TEST default_locks_via_rpc 00:05:27.963 ************************************ 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60987 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60987 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 60987 ']' 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.963 02:00:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.222 [2024-07-23 02:00:36.807949] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:28.222 [2024-07-23 02:00:36.808150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60987 ] 00:05:28.222 [2024-07-23 02:00:36.965273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.481 [2024-07-23 02:00:37.158941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:29.448 02:00:37 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60987 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.448 02:00:37 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60987 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60987 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 60987 ']' 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 60987 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 60987 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.714 killing process with pid 60987 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60987' 00:05:29.714 02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 60987 00:05:29.714 
02:00:38 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 60987 00:05:31.619 00:05:31.619 real 0m3.520s 00:05:31.619 user 0m3.452s 00:05:31.619 sys 0m0.692s 00:05:31.619 02:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.619 02:00:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.619 ************************************ 00:05:31.619 END TEST default_locks_via_rpc 00:05:31.619 ************************************ 00:05:31.619 02:00:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:31.619 02:00:40 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:31.619 02:00:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.619 02:00:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.619 02:00:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.619 ************************************ 00:05:31.619 START TEST non_locking_app_on_locked_coremask 00:05:31.619 ************************************ 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=61055 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 61055 /var/tmp/spdk.sock 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61055 ']' 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:31.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:31.619 02:00:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.619 [2024-07-23 02:00:40.326967] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:31.620 [2024-07-23 02:00:40.327156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61055 ] 00:05:31.878 [2024-07-23 02:00:40.479792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.137 [2024-07-23 02:00:40.671071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=61071 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 61071 /var/tmp/spdk2.sock 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 
-- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61071 ']' 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.705 02:00:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.964 [2024-07-23 02:00:41.498417] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:32.964 [2024-07-23 02:00:41.498612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:05:32.964 [2024-07-23 02:00:41.662632] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:32.964 [2024-07-23 02:00:41.662679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.532 [2024-07-23 02:00:42.074856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.436 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.436 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:35.436 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 61055 00:05:35.436 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61055 00:05:35.436 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 61055 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61055 ']' 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61055 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61055 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.371 killing process with pid 61055 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 61055' 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61055 00:05:36.371 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61055 00:05:40.562 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 61071 00:05:40.562 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61071 ']' 00:05:40.562 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61071 00:05:40.562 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:40.562 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.562 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61071 00:05:40.563 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.563 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.563 killing process with pid 61071 00:05:40.563 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61071' 00:05:40.563 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61071 00:05:40.563 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61071 00:05:41.939 00:05:41.939 real 0m10.232s 00:05:41.939 user 0m10.689s 00:05:41.939 sys 0m1.383s 00:05:41.939 02:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:05:41.939 ************************************ 00:05:41.939 END TEST non_locking_app_on_locked_coremask 00:05:41.939 ************************************ 00:05:41.939 02:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.939 02:00:50 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:41.939 02:00:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:41.939 02:00:50 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:41.939 02:00:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.939 02:00:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.939 ************************************ 00:05:41.939 START TEST locking_app_on_unlocked_coremask 00:05:41.939 ************************************ 00:05:41.939 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:41.939 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=61208 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 61208 /var/tmp/spdk.sock 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61208 ']' 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:41.940 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:41.940 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:41.940 [2024-07-23 02:00:50.604037] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:41.940 [2024-07-23 02:00:50.604209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61208 ] 00:05:42.198 [2024-07-23 02:00:50.757772] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:42.198 [2024-07-23 02:00:50.757821] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.198 [2024-07-23 02:00:50.947262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=61224 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 61224 /var/tmp/spdk2.sock 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61224 ']' 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:43.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:43.135 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.135 [2024-07-23 02:00:51.786894] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:43.135 [2024-07-23 02:00:51.787101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61224 ] 00:05:43.394 [2024-07-23 02:00:51.948593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.654 [2024-07-23 02:00:52.342270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.562 02:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.562 02:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:45.562 02:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 61224 00:05:45.562 02:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61224 00:05:45.562 02:00:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 61208 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61208 ']' 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61208 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61208 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61208' 00:05:46.499 killing process with pid 61208 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61208 00:05:46.499 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 61208 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 61224 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61224 ']' 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 61224 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61224 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.690 killing process with pid 61224 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61224' 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 61224 00:05:50.690 02:00:58 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@972 -- # wait 61224 00:05:52.066 00:05:52.066 real 0m10.105s 00:05:52.066 user 0m10.534s 00:05:52.066 sys 0m1.376s 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.066 ************************************ 00:05:52.066 END TEST locking_app_on_unlocked_coremask 00:05:52.066 ************************************ 00:05:52.066 02:01:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.066 02:01:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.066 02:01:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.066 02:01:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.066 02:01:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.066 ************************************ 00:05:52.066 START TEST locking_app_on_locked_coremask 00:05:52.066 ************************************ 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61359 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61359 /var/tmp/spdk.sock 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61359 ']' 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.066 02:01:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.066 [2024-07-23 02:01:00.832632] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:52.066 [2024-07-23 02:01:00.832851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61359 ] 00:05:52.325 [2024-07-23 02:01:00.996941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.584 [2024-07-23 02:01:01.192111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61377 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61377 /var/tmp/spdk2.sock 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61377 /var/tmp/spdk2.sock 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61377 /var/tmp/spdk2.sock 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 61377 ']' 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.154 02:01:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.413 [2024-07-23 02:01:02.029438] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:53.413 [2024-07-23 02:01:02.029637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61377 ] 00:05:53.413 [2024-07-23 02:01:02.190171] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61359 has claimed it. 00:05:53.413 [2024-07-23 02:01:02.190248] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.981 ERROR: process (pid: 61377) is no longer running 00:05:53.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61377) - No such process 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61359 00:05:53.981 02:01:02 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61359 00:05:53.981 02:01:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.548 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61359 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 61359 ']' 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 61359 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61359 00:05:54.549 killing process with pid 61359 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61359' 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 61359 00:05:54.549 02:01:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 61359 00:05:56.453 00:05:56.453 real 0m4.225s 00:05:56.453 user 0m4.448s 00:05:56.453 sys 0m0.893s 00:05:56.453 02:01:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.453 ************************************ 00:05:56.453 END TEST locking_app_on_locked_coremask 00:05:56.453 
************************************ 00:05:56.453 02:01:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.453 02:01:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.453 02:01:04 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.453 02:01:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.453 02:01:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.453 02:01:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.453 ************************************ 00:05:56.453 START TEST locking_overlapped_coremask 00:05:56.453 ************************************ 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61441 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61441 /var/tmp/spdk.sock 00:05:56.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61441 ']' 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.453 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.454 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.454 02:01:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.454 [2024-07-23 02:01:05.107560] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:05:56.454 [2024-07-23 02:01:05.107748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61441 ] 00:05:56.713 [2024-07-23 02:01:05.270318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.971 [2024-07-23 02:01:05.526606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.971 [2024-07-23 02:01:05.526722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.971 [2024-07-23 02:01:05.526731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61470 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61470 /var/tmp/spdk2.sock 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 61470 /var/tmp/spdk2.sock 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 61470 /var/tmp/spdk2.sock 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 61470 ']' 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.538 02:01:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.796 [2024-07-23 02:01:06.403900] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:05:57.796 [2024-07-23 02:01:06.404478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61470 ] 00:05:57.796 [2024-07-23 02:01:06.572837] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61441 has claimed it. 00:05:57.796 [2024-07-23 02:01:06.572937] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:05:58.364 ERROR: process (pid: 61470) is no longer running 00:05:58.364 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (61470) - No such process 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61441 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 61441 ']' 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 61441 00:05:58.364 02:01:07 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61441 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61441' 00:05:58.364 killing process with pid 61441 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 61441 00:05:58.364 02:01:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 61441 00:06:00.267 00:06:00.267 real 0m3.971s 00:06:00.267 user 0m10.171s 00:06:00.267 sys 0m0.667s 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.267 ************************************ 00:06:00.267 END TEST locking_overlapped_coremask 00:06:00.267 ************************************ 00:06:00.267 02:01:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:00.267 02:01:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.267 02:01:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.267 02:01:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.267 02:01:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 
00:06:00.267 ************************************ 00:06:00.267 START TEST locking_overlapped_coremask_via_rpc 00:06:00.267 ************************************ 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61523 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61523 /var/tmp/spdk.sock 00:06:00.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61523 ']' 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:00.267 02:01:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.526 [2024-07-23 02:01:09.082855] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:00.526 [2024-07-23 02:01:09.083465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61523 ] 00:06:00.526 [2024-07-23 02:01:09.237476] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.526 [2024-07-23 02:01:09.237699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.784 [2024-07-23 02:01:09.446205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.784 [2024-07-23 02:01:09.446328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.784 [2024-07-23 02:01:09.446339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61547 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61547 /var/tmp/spdk2.sock 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61547 ']' 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.718 02:01:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.718 [2024-07-23 02:01:10.313566] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:01.718 [2024-07-23 02:01:10.313785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61547 ] 00:06:01.718 [2024-07-23 02:01:10.488671] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.718 [2024-07-23 02:01:10.488718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.284 [2024-07-23 02:01:10.925056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.284 [2024-07-23 02:01:10.925149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.284 [2024-07-23 02:01:10.925175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.263 02:01:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.263 [2024-07-23 02:01:12.913768] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61523 has claimed it. 00:06:04.263 request: 00:06:04.263 { 00:06:04.263 "method": "framework_enable_cpumask_locks", 00:06:04.263 "req_id": 1 00:06:04.263 } 00:06:04.263 Got JSON-RPC error response 00:06:04.263 response: 00:06:04.263 { 00:06:04.263 "code": -32603, 00:06:04.263 "message": "Failed to claim CPU core: 2" 00:06:04.263 } 00:06:04.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61523 /var/tmp/spdk.sock 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61523 ']' 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.263 02:01:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61547 /var/tmp/spdk2.sock 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 61547 ']' 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.521 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.779 00:06:04.779 real 0m4.486s 00:06:04.779 user 0m1.432s 00:06:04.779 sys 0m0.242s 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.779 ************************************ 00:06:04.779 END TEST locking_overlapped_coremask_via_rpc 00:06:04.779 ************************************ 00:06:04.779 02:01:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:04.779 02:01:13 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:04.779 02:01:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 
61523 ]] 00:06:04.779 02:01:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61523 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61523 ']' 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61523 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61523 00:06:04.779 killing process with pid 61523 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61523' 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61523 00:06:04.779 02:01:13 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61523 00:06:07.306 02:01:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61547 ]] 00:06:07.306 02:01:15 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61547 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61547 ']' 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61547 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61547 00:06:07.306 killing process with pid 61547 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:07.306 
02:01:15 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61547' 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 61547 00:06:07.306 02:01:15 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 61547 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61523 ]] 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61523 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61523 ']' 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61523 00:06:09.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61523) - No such process 00:06:09.209 Process with pid 61523 is not found 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61523 is not found' 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61547 ]] 00:06:09.209 Process with pid 61547 is not found 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61547 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 61547 ']' 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 61547 00:06:09.209 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (61547) - No such process 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 61547 is not found' 00:06:09.209 02:01:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:09.209 ************************************ 00:06:09.209 END TEST cpu_locks 00:06:09.209 ************************************ 00:06:09.209 00:06:09.209 real 0m44.638s 00:06:09.209 user 1m16.131s 00:06:09.209 sys 0m7.179s 
00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.209 02:01:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.209 02:01:17 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.209 ************************************ 00:06:09.209 END TEST event 00:06:09.209 ************************************ 00:06:09.209 00:06:09.209 real 1m14.925s 00:06:09.209 user 2m13.682s 00:06:09.209 sys 0m10.993s 00:06:09.209 02:01:17 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.209 02:01:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.209 02:01:17 -- common/autotest_common.sh@1142 -- # return 0 00:06:09.209 02:01:17 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.209 02:01:17 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.209 02:01:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.209 02:01:17 -- common/autotest_common.sh@10 -- # set +x 00:06:09.209 ************************************ 00:06:09.209 START TEST thread 00:06:09.209 ************************************ 00:06:09.209 02:01:17 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:09.209 * Looking for test storage... 
00:06:09.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:09.209 02:01:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.209 02:01:17 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:09.209 02:01:17 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.209 02:01:17 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.209 ************************************ 00:06:09.209 START TEST thread_poller_perf 00:06:09.209 ************************************ 00:06:09.209 02:01:17 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:09.209 [2024-07-23 02:01:17.830714] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:09.209 [2024-07-23 02:01:17.830882] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61729 ] 00:06:09.467 [2024-07-23 02:01:18.008989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.725 [2024-07-23 02:01:18.259638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.725 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:11.108 ====================================== 00:06:11.108 busy:2207448680 (cyc) 00:06:11.108 total_run_count: 381000 00:06:11.108 tsc_hz: 2200000000 (cyc) 00:06:11.108 ====================================== 00:06:11.108 poller_cost: 5793 (cyc), 2633 (nsec) 00:06:11.108 00:06:11.108 real 0m1.796s 00:06:11.108 user 0m1.561s 00:06:11.108 sys 0m0.125s 00:06:11.108 02:01:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.108 02:01:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.108 ************************************ 00:06:11.108 END TEST thread_poller_perf 00:06:11.108 ************************************ 00:06:11.108 02:01:19 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:11.108 02:01:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.108 02:01:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:11.108 02:01:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.108 02:01:19 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.108 ************************************ 00:06:11.108 START TEST thread_poller_perf 00:06:11.108 ************************************ 00:06:11.108 02:01:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.108 [2024-07-23 02:01:19.684065] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:11.108 [2024-07-23 02:01:19.684404] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61766 ] 00:06:11.108 [2024-07-23 02:01:19.859031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.367 [2024-07-23 02:01:20.102567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.367 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:12.745 ====================================== 00:06:12.745 busy:2203750140 (cyc) 00:06:12.745 total_run_count: 4839000 00:06:12.745 tsc_hz: 2200000000 (cyc) 00:06:12.745 ====================================== 00:06:12.745 poller_cost: 455 (cyc), 206 (nsec) 00:06:12.745 00:06:12.745 real 0m1.779s 00:06:12.745 user 0m1.554s 00:06:12.745 sys 0m0.116s 00:06:12.745 02:01:21 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.745 02:01:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.745 ************************************ 00:06:12.745 END TEST thread_poller_perf 00:06:12.746 ************************************ 00:06:12.746 02:01:21 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:12.746 02:01:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.746 ************************************ 00:06:12.746 END TEST thread 00:06:12.746 ************************************ 00:06:12.746 00:06:12.746 real 0m3.764s 00:06:12.746 user 0m3.186s 00:06:12.746 sys 0m0.350s 00:06:12.746 02:01:21 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.746 02:01:21 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.746 02:01:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.746 02:01:21 -- spdk/autotest.sh@183 -- # run_test accel 
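The second run uses a 0 microsecond period, so the pollers fire far more often (total_run_count 4839000 vs. 381000) and the per-invocation cost drops accordingly. The same cross-check, under the same assumption that poller_cost is busy cycles divided by total_run_count, truncated:

```shell
# Recompute the poller_cost line for the 0 us period run above.
busy=2203750140          # busy cycles reported above
total_run_count=4839000  # poller invocations completed
tsc_hz=2200000000        # TSC frequency in Hz

cost_cyc=$(( busy / total_run_count ))            # cycles per poller invocation
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # same cost in nanoseconds
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
```

This matches the reported `poller_cost: 455 (cyc), 206 (nsec)`.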
/home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:12.746 02:01:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.746 02:01:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.746 02:01:21 -- common/autotest_common.sh@10 -- # set +x 00:06:12.746 ************************************ 00:06:12.746 START TEST accel 00:06:12.746 ************************************ 00:06:12.746 02:01:21 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:13.005 * Looking for test storage... 00:06:13.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:13.005 02:01:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:13.005 02:01:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:13.005 02:01:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:13.005 02:01:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=61847 00:06:13.005 02:01:21 accel -- accel/accel.sh@63 -- # waitforlisten 61847 00:06:13.005 02:01:21 accel -- common/autotest_common.sh@829 -- # '[' -z 61847 ']' 00:06:13.005 02:01:21 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.005 02:01:21 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:13.005 02:01:21 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.005 02:01:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:13.005 02:01:21 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:13.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.005 02:01:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:13.005 02:01:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:13.005 02:01:21 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:13.005 02:01:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.005 02:01:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:13.005 02:01:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.005 02:01:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:13.005 02:01:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:13.005 02:01:21 accel -- accel/accel.sh@41 -- # jq -r . 00:06:13.264 [2024-07-23 02:01:21.783542] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:13.264 [2024-07-23 02:01:21.783780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61847 ] 00:06:13.264 [2024-07-23 02:01:21.953893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.524 [2024-07-23 02:01:22.145945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.106 02:01:22 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:14.106 02:01:22 accel -- common/autotest_common.sh@862 -- # return 0 00:06:14.106 02:01:22 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:14.106 02:01:22 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:14.106 02:01:22 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:14.106 02:01:22 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:14.106 02:01:22 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:14.106 02:01:22 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:14.106 02:01:22 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:14.106 02:01:22 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:14.106 02:01:22 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.106 02:01:22 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:14.106 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.106 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.106 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.106 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.106 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.106 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.106 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.106 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.106 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.106 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.106 02:01:22 accel -- accel/accel.sh@73 -- # 
expected_opcs["$opc"]=software 00:06:14.106 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # 
IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.380 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.380 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.380 02:01:22 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:14.381 02:01:22 accel -- accel/accel.sh@72 -- # IFS== 00:06:14.381 02:01:22 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:14.381 02:01:22 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:14.381 02:01:22 accel -- accel/accel.sh@75 -- # killprocess 61847 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@948 -- # '[' -z 61847 ']' 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@952 -- # kill -0 61847 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@953 -- # uname 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 61847 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61847' 00:06:14.381 killing process with pid 61847 00:06:14.381 02:01:22 accel -- common/autotest_common.sh@967 -- # kill 61847 00:06:14.381 02:01:22 
accel -- common/autotest_common.sh@972 -- # wait 61847 00:06:16.297 02:01:24 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:16.297 02:01:24 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.297 02:01:24 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:16.297 02:01:24 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:16.297 02:01:24 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.297 02:01:24 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:16.297 02:01:24 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.297 02:01:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.297 ************************************ 00:06:16.297 START TEST accel_missing_filename 00:06:16.297 ************************************ 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.297 02:01:24 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:16.297 02:01:24 accel.accel_missing_filename -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:16.297 02:01:24 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:16.297 [2024-07-23 02:01:24.944331] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:16.297 [2024-07-23 02:01:24.944547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61917 ] 00:06:16.556 [2024-07-23 02:01:25.128133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.816 [2024-07-23 02:01:25.395069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.816 [2024-07-23 02:01:25.566773] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.384 [2024-07-23 02:01:25.960217] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:17.644 A filename is required. 
00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.644 00:06:17.644 real 0m1.415s 00:06:17.644 user 0m1.151s 00:06:17.644 sys 0m0.212s 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.644 02:01:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:17.644 ************************************ 00:06:17.644 END TEST accel_missing_filename 00:06:17.644 ************************************ 00:06:17.644 02:01:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.644 02:01:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.644 02:01:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:17.644 02:01:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.644 02:01:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.644 ************************************ 00:06:17.644 START TEST accel_compress_verify 00:06:17.644 ************************************ 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:17.644 02:01:26 accel.accel_compress_verify -- 
common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.644 02:01:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:17.644 02:01:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:17.644 [2024-07-23 02:01:26.386713] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:17.644 [2024-07-23 02:01:26.386849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:06:17.903 [2024-07-23 02:01:26.546566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.163 [2024-07-23 02:01:26.733753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.163 [2024-07-23 02:01:26.903529] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.731 [2024-07-23 02:01:27.302593] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:18.990 00:06:18.990 Compression does not support the verify option, aborting. 00:06:18.990 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:18.990 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.990 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:18.990 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:18.991 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:18.991 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.991 00:06:18.991 real 0m1.278s 00:06:18.991 user 0m1.043s 00:06:18.991 sys 0m0.177s 00:06:18.991 ************************************ 00:06:18.991 END TEST accel_compress_verify 00:06:18.991 ************************************ 00:06:18.991 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.991 02:01:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:18.991 02:01:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.991 02:01:27 accel -- accel/accel.sh@95 -- # run_test 
accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.991 02:01:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:18.991 02:01:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.991 02:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.991 ************************************ 00:06:18.991 START TEST accel_wrong_workload 00:06:18.991 ************************************ 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.991 02:01:27 accel.accel_wrong_workload -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:18.991 02:01:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:18.991 Unsupported workload type: foobar 00:06:18.991 [2024-07-23 02:01:27.736799] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.991 accel_perf options: 00:06:18.991 [-h help message] 00:06:18.991 [-q queue depth per core] 00:06:18.991 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.991 [-T number of threads per core 00:06:18.991 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.991 [-t time in seconds] 00:06:18.991 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.991 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:18.991 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.991 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.991 [-S for crc32c workload, use this seed value (default 0) 00:06:18.991 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.991 [-f for fill workload, use this BYTE value (default 255) 00:06:18.991 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.991 [-y verify result if this switch is on] 00:06:18.991 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.991 Can be used to spread operations across a wider range of memory. 
00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:18.991 ************************************ 00:06:18.991 END TEST accel_wrong_workload 00:06:18.991 ************************************ 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:18.991 00:06:18.991 real 0m0.089s 00:06:18.991 user 0m0.090s 00:06:18.991 sys 0m0.049s 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.991 02:01:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.251 02:01:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.251 ************************************ 00:06:19.251 START TEST accel_negative_buffers 00:06:19.251 ************************************ 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:19.251 02:01:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:19.251 -x option must be non-negative. 00:06:19.251 [2024-07-23 02:01:27.875235] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:19.251 accel_perf options: 00:06:19.251 [-h help message] 00:06:19.251 [-q queue depth per core] 00:06:19.251 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:19.251 [-T number of threads per core 00:06:19.251 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:19.251 [-t time in seconds] 00:06:19.251 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:19.251 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:19.251 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:19.251 [-l for compress/decompress workloads, name of uncompressed input file 00:06:19.251 [-S for crc32c workload, use this seed value (default 0) 00:06:19.251 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:19.251 [-f for fill workload, use this BYTE value (default 255) 00:06:19.251 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:19.251 [-y verify result if this switch is on] 00:06:19.251 [-a tasks to allocate per core (default: same value as -q)] 00:06:19.251 Can be used to spread operations across a wider range of memory. 
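The `NOT accel_perf ...` invocations in these negative tests (accel_missing_filename, accel_wrong_workload, accel_negative_buffers) pass only when accel_perf itself exits non-zero, with the harness then normalizing the status to `es=1`. A minimal sketch of that inversion pattern, using a hypothetical `not_ok` wrapper rather than SPDK's actual `NOT` helper:

```shell
# Hypothetical status-inverting wrapper: succeed when the wrapped command fails.
not_ok() {
  if "$@"; then
    return 1   # command unexpectedly succeeded: the negative test should fail
  else
    return 0   # command failed as expected
  fi
}

# 'false' always exits non-zero, so the wrapper reports success.
if not_ok false; then result=pass; else result=fail; fi
echo "$result"
```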
00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:19.251 00:06:19.251 real 0m0.087s 00:06:19.251 user 0m0.104s 00:06:19.251 sys 0m0.042s 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.251 ************************************ 00:06:19.251 END TEST accel_negative_buffers 00:06:19.251 ************************************ 00:06:19.251 02:01:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.251 02:01:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.251 02:01:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.251 ************************************ 00:06:19.251 START TEST accel_crc32c 00:06:19.251 ************************************ 00:06:19.251 02:01:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:19.251 02:01:27 accel.accel_crc32c -- 
accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:19.251 02:01:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:19.251 [2024-07-23 02:01:28.015737] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:19.251 [2024-07-23 02:01:28.015897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62026 ] 00:06:19.510 [2024-07-23 02:01:28.189691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.770 [2024-07-23 02:01:28.390389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r 
var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 
02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:20.029 02:01:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.933 02:01:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.933 00:06:21.933 real 0m2.322s 00:06:21.933 user 0m2.034s 00:06:21.933 sys 0m0.192s 00:06:21.933 02:01:30 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.933 ************************************ 00:06:21.933 END TEST accel_crc32c 00:06:21.933 ************************************ 00:06:21.933 02:01:30 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:21.933 02:01:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.933 02:01:30 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:21.933 02:01:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:21.933 02:01:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.933 02:01:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.933 ************************************ 00:06:21.933 START TEST accel_crc32c_C2 00:06:21.933 
************************************ 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.933 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:21.933 [2024-07-23 02:01:30.385873] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:21.933 [2024-07-23 02:01:30.386037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62068 ] 00:06:21.933 [2024-07-23 02:01:30.561049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.192 [2024-07-23 02:01:30.758764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 
-- accel/accel.sh@20 -- # val=32 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.192 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:22.193 
02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:22.193 02:01:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.097 00:06:24.097 real 0m2.322s 00:06:24.097 user 0m2.039s 00:06:24.097 sys 0m0.190s 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.097 02:01:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:24.097 ************************************ 00:06:24.097 END TEST accel_crc32c_C2 00:06:24.097 ************************************ 00:06:24.097 02:01:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.097 02:01:32 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:24.097 02:01:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:24.097 02:01:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.097 02:01:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.097 ************************************ 00:06:24.097 START TEST accel_copy 00:06:24.097 ************************************ 00:06:24.097 02:01:32 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:24.097 02:01:32 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:24.097 [2024-07-23 02:01:32.764868] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:24.097 [2024-07-23 02:01:32.765034] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62119 ] 00:06:24.356 [2024-07-23 02:01:32.939736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.356 [2024-07-23 02:01:33.132295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 
accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:24.616 02:01:33 
accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:24.616 02:01:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- 
accel/accel.sh@20 -- # val= 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:26.521 02:01:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.521 00:06:26.521 real 0m2.320s 00:06:26.521 user 0m2.011s 00:06:26.521 sys 0m0.211s 00:06:26.521 02:01:35 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.521 02:01:35 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:26.521 ************************************ 00:06:26.521 END TEST accel_copy 00:06:26.521 ************************************ 00:06:26.521 02:01:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.521 02:01:35 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.521 02:01:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:26.521 02:01:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.521 02:01:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.521 ************************************ 00:06:26.521 START TEST accel_fill 00:06:26.521 ************************************ 00:06:26.521 02:01:35 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:26.521 02:01:35 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:26.521 [2024-07-23 02:01:35.124529] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:26.521 [2024-07-23 02:01:35.124650] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62160 ] 00:06:26.521 [2024-07-23 02:01:35.285054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.781 [2024-07-23 02:01:35.511149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.040 02:01:35 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.040 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 
accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:27.041 02:01:35 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.947 02:01:37 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.947 02:01:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:28.948 02:01:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.948 00:06:28.948 real 0m2.331s 00:06:28.948 user 0m0.014s 00:06:28.948 sys 0m0.005s 00:06:28.948 02:01:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.948 02:01:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 ************************************ 00:06:28.948 END TEST accel_fill 00:06:28.948 ************************************ 00:06:28.948 02:01:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.948 02:01:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:28.948 02:01:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:28.948 02:01:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.948 02:01:37 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.948 ************************************ 00:06:28.948 START TEST accel_copy_crc32c 00:06:28.948 ************************************ 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:28.948 02:01:37 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:28.948 [2024-07-23 02:01:37.510942] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:28.948 [2024-07-23 02:01:37.511063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62201 ] 00:06:28.948 [2024-07-23 02:01:37.666478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.207 [2024-07-23 02:01:37.863931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # 
case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.467 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:29.468 02:01:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.373 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.374 
02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:31.374 ************************************ 00:06:31.374 END TEST accel_copy_crc32c 00:06:31.374 ************************************ 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.374 00:06:31.374 real 0m2.294s 00:06:31.374 user 0m2.018s 00:06:31.374 sys 0m0.183s 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.374 02:01:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:31.374 02:01:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.374 02:01:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:31.374 02:01:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:31.374 02:01:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.374 02:01:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.374 ************************************ 00:06:31.374 START TEST accel_copy_crc32c_C2 00:06:31.374 
************************************ 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.374 02:01:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:31.374 [2024-07-23 02:01:39.863070] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:31.374 [2024-07-23 02:01:39.863237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62248 ] 00:06:31.374 [2024-07-23 02:01:40.040648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.633 [2024-07-23 02:01:40.269055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.892 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.892 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.892 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.893 02:01:40 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # 
val=Yes 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:31.893 02:01:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 
02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.799 00:06:33.799 real 0m2.356s 00:06:33.799 user 0m2.065s 00:06:33.799 sys 0m0.194s 00:06:33.799 ************************************ 00:06:33.799 END TEST accel_copy_crc32c_C2 00:06:33.799 ************************************ 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.799 02:01:42 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 02:01:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:33.799 02:01:42 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:33.799 02:01:42 accel -- common/autotest_common.sh@1099 -- # 
'[' 7 -le 1 ']' 00:06:33.799 02:01:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.799 02:01:42 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 ************************************ 00:06:33.799 START TEST accel_dualcast 00:06:33.799 ************************************ 00:06:33.799 02:01:42 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:33.799 02:01:42 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:33.799 [2024-07-23 02:01:42.270523] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:33.799 [2024-07-23 02:01:42.271113] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62294 ] 00:06:33.799 [2024-07-23 02:01:42.439169] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.059 [2024-07-23 02:01:42.632981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast 
-- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 
00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:34.059 02:01:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:36.035 02:01:44 accel.accel_dualcast -- accel/accel.sh@27 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.035 00:06:36.035 real 0m2.315s 00:06:36.035 user 0m2.018s 00:06:36.035 sys 0m0.200s 00:06:36.035 ************************************ 00:06:36.035 END TEST accel_dualcast 00:06:36.035 ************************************ 00:06:36.035 02:01:44 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.036 02:01:44 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:36.036 02:01:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:36.036 02:01:44 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:36.036 02:01:44 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:36.036 02:01:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.036 02:01:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.036 ************************************ 00:06:36.036 START TEST accel_compare 00:06:36.036 ************************************ 00:06:36.036 02:01:44 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.036 02:01:44 
accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:36.036 02:01:44 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:36.036 [2024-07-23 02:01:44.638195] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:36.036 [2024-07-23 02:01:44.638358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62335 ] 00:06:36.295 [2024-07-23 02:01:44.813816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.295 [2024-07-23 02:01:45.005572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 
02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.555 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.556 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:36.556 02:01:45 
accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:36.556 02:01:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:36.556 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:36.556 02:01:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:38.461 02:01:46 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.461 00:06:38.461 real 0m2.324s 00:06:38.461 user 0m2.025s 00:06:38.461 sys 0m0.203s 00:06:38.461 ************************************ 00:06:38.461 END TEST accel_compare 00:06:38.461 ************************************ 00:06:38.461 02:01:46 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.461 02:01:46 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:38.461 02:01:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:38.461 02:01:46 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:38.461 02:01:46 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:38.461 02:01:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.461 02:01:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.461 ************************************ 00:06:38.461 START TEST accel_xor 00:06:38.461 ************************************ 00:06:38.461 02:01:46 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:38.461 02:01:46 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:38.461 [2024-07-23 02:01:47.013152] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:38.461 [2024-07-23 02:01:47.013323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62382 ] 00:06:38.461 [2024-07-23 02:01:47.188063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.720 [2024-07-23 02:01:47.432059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- 
accel/accel.sh@20 -- # val=0x1 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.979 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:38.980 02:01:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 
00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.882 00:06:40.882 real 0m2.367s 00:06:40.882 user 0m2.064s 00:06:40.882 sys 0m0.203s 00:06:40.882 02:01:49 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.882 02:01:49 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:40.882 ************************************ 00:06:40.882 END TEST accel_xor 00:06:40.882 ************************************ 00:06:40.882 02:01:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.882 02:01:49 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:40.882 02:01:49 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:40.882 02:01:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.882 02:01:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.882 ************************************ 00:06:40.882 START TEST accel_xor 00:06:40.882 ************************************ 00:06:40.882 02:01:49 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 
00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:40.882 02:01:49 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:40.883 [2024-07-23 02:01:49.436522] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:40.883 [2024-07-23 02:01:49.436701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62427 ] 00:06:40.883 [2024-07-23 02:01:49.613364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.141 [2024-07-23 02:01:49.861933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 
00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:41.401 02:01:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.304 02:01:51 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:43.304 02:01:51 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.304 00:06:43.304 real 0m2.382s 00:06:43.304 user 0m2.087s 00:06:43.304 sys 0m0.201s 00:06:43.304 02:01:51 accel.accel_xor -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.304 02:01:51 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 ************************************ 00:06:43.304 END TEST accel_xor 00:06:43.304 ************************************ 00:06:43.304 02:01:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.304 02:01:51 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:43.304 02:01:51 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:43.304 02:01:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.304 02:01:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.304 ************************************ 00:06:43.304 START TEST accel_dif_verify 00:06:43.304 ************************************ 00:06:43.304 02:01:51 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.304 02:01:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.305 
02:01:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.305 02:01:51 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:43.305 02:01:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:43.305 [2024-07-23 02:01:51.873855] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:43.305 [2024-07-23 02:01:51.874009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62469 ] 00:06:43.305 [2024-07-23 02:01:52.047542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.563 [2024-07-23 02:01:52.239772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 
-- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.822 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.822 02:01:52 accel.accel_dif_verify -- 
accel/accel.sh@20 -- # val='8 bytes' 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:43.823 02:01:52 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:45.725 02:01:54 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:45.725 ************************************ 00:06:45.725 END TEST accel_dif_verify 00:06:45.725 ************************************ 00:06:45.725 00:06:45.725 real 0m2.320s 00:06:45.725 user 0m2.030s 00:06:45.725 sys 0m0.196s 00:06:45.725 02:01:54 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.725 02:01:54 accel.accel_dif_verify -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.725 02:01:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:45.725 02:01:54 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:45.725 02:01:54 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:45.725 02:01:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.725 02:01:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:45.725 ************************************ 00:06:45.725 START TEST accel_dif_generate 00:06:45.725 ************************************ 00:06:45.725 02:01:54 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.725 02:01:54 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:45.725 02:01:54 
accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:45.725 [2024-07-23 02:01:54.247648] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:45.725 [2024-07-23 02:01:54.247812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:06:45.725 [2024-07-23 02:01:54.420593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.984 [2024-07-23 02:01:54.614209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate 
-- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:46.243 02:01:54 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.243 02:01:54 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 
02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:46.244 02:01:54 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var 
val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:48.149 02:01:56 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.149 00:06:48.149 real 0m2.317s 00:06:48.149 user 0m0.015s 00:06:48.149 sys 0m0.004s 00:06:48.150 02:01:56 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.150 ************************************ 00:06:48.150 END TEST accel_dif_generate 00:06:48.150 ************************************ 00:06:48.150 02:01:56 
accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:48.150 02:01:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.150 02:01:56 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:48.150 02:01:56 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:48.150 02:01:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.150 02:01:56 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.150 ************************************ 00:06:48.150 START TEST accel_dif_generate_copy 00:06:48.150 ************************************ 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:48.150 02:01:56 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:48.150 [2024-07-23 02:01:56.616526] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:48.150 [2024-07-23 02:01:56.616685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62557 ] 00:06:48.150 [2024-07-23 02:01:56.792215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.409 [2024-07-23 02:01:56.997323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 
00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:48.409 02:01:57 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 
-- # case "$var" in 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.315 00:06:50.315 real 0m2.321s 00:06:50.315 user 0m2.031s 00:06:50.315 sys 0m0.197s 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.315 02:01:58 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:50.315 ************************************ 00:06:50.315 END TEST accel_dif_generate_copy 00:06:50.315 
************************************ 00:06:50.315 02:01:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.315 02:01:58 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:50.315 02:01:58 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.315 02:01:58 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:50.315 02:01:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.315 02:01:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.315 ************************************ 00:06:50.315 START TEST accel_comp 00:06:50.315 ************************************ 00:06:50.315 02:01:58 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 
00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:50.315 02:01:58 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:50.315 [2024-07-23 02:01:58.991460] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:06:50.315 [2024-07-23 02:01:58.992273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62602 ] 00:06:50.574 [2024-07-23 02:01:59.164417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.834 [2024-07-23 02:01:59.355358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var 
val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # 
read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp 
-- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:50.834 02:01:59 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:52.739 02:02:01 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.739 00:06:52.739 real 0m2.317s 00:06:52.739 user 0m2.026s 00:06:52.739 sys 0m0.198s 00:06:52.739 02:02:01 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.739 02:02:01 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:52.739 ************************************ 00:06:52.739 END TEST accel_comp 00:06:52.739 ************************************ 00:06:52.739 02:02:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.739 02:02:01 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.739 02:02:01 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:52.739 02:02:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.739 02:02:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.739 ************************************ 00:06:52.739 START TEST accel_decomp 00:06:52.739 ************************************ 00:06:52.739 02:02:01 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:52.739 
02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:52.739 02:02:01 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:52.739 [2024-07-23 02:02:01.363859] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:52.739 [2024-07-23 02:02:01.364008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62650 ] 00:06:52.998 [2024-07-23 02:02:01.536144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.998 [2024-07-23 02:02:01.725863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:53.257 02:02:01 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 
00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:53.257 02:02:01 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- 
accel/accel.sh@20 -- # val= 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:55.162 02:02:03 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.162 00:06:55.162 real 0m2.318s 00:06:55.162 user 0m2.017s 00:06:55.162 sys 0m0.205s 00:06:55.162 02:02:03 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.162 ************************************ 00:06:55.162 END TEST accel_decomp 00:06:55.162 ************************************ 00:06:55.162 02:02:03 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:55.162 02:02:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.162 02:02:03 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.162 02:02:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:55.162 02:02:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.162 02:02:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.162 ************************************ 00:06:55.162 START TEST accel_decomp_full 00:06:55.162 ************************************ 00:06:55.162 02:02:03 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 
00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:55.162 02:02:03 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:55.162 [2024-07-23 02:02:03.737987] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:55.162 [2024-07-23 02:02:03.738154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62691 ] 00:06:55.162 [2024-07-23 02:02:03.912646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.423 [2024-07-23 02:02:04.103253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.699 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.700 02:02:04 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:55.700 02:02:04 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.617 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 
-- # IFS=: 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.618 02:02:06 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.618 00:06:57.618 real 0m2.343s 00:06:57.618 user 0m0.017s 00:06:57.618 sys 0m0.004s 00:06:57.618 ************************************ 00:06:57.618 END TEST accel_decomp_full 00:06:57.618 02:02:06 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.618 02:02:06 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:57.618 ************************************ 00:06:57.618 02:02:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.618 02:02:06 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.618 02:02:06 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:57.618 02:02:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.618 02:02:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.618 
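Each test in this log is framed by `START TEST` / `END TEST` banners with a `real`/`user`/`sys` timing summary between them, driven by `run_test` in common/autotest_common.sh. A minimal sketch of that wrapper shape, under the assumption (not confirmed by this log) that it is essentially a banner plus bash's `time` keyword around the test command:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run_test banner-and-timing pattern seen above:
# print a START banner, time the test body, print an END banner, and
# propagate the test's exit status.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"          # emits the real/user/sys summary on stderr
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_true true
```

Propagating `rc` matters: the surrounding `catchError` pipeline stage fails the build only if a wrapped test command actually returned nonzero.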
************************************ 00:06:57.618 START TEST accel_decomp_mcore 00:06:57.618 ************************************ 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:57.618 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:57.618 [2024-07-23 02:02:06.131502] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:06:57.618 [2024-07-23 02:02:06.131676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62732 ] 00:06:57.618 [2024-07-23 02:02:06.305112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.877 [2024-07-23 02:02:06.510623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.877 [2024-07-23 02:02:06.510729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.877 [2024-07-23 02:02:06.510834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.877 [2024-07-23 02:02:06.510854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # 
case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:58.136 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:58.137 02:02:06 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 
02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.040 00:07:00.040 real 0m2.445s 00:07:00.040 user 0m0.018s 00:07:00.040 sys 0m0.003s 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.040 ************************************ 00:07:00.040 END TEST accel_decomp_mcore 00:07:00.040 ************************************ 00:07:00.040 02:02:08 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:00.040 02:02:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.040 02:02:08 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.040 02:02:08 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:00.040 02:02:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.040 02:02:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.040 ************************************ 00:07:00.040 START TEST accel_decomp_full_mcore 00:07:00.040 ************************************ 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.040 
02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:00.040 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.041 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.041 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.041 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.041 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.041 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:00.041 02:02:08 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:00.041 [2024-07-23 02:02:08.628544] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:00.041 [2024-07-23 02:02:08.628707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62787 ] 00:07:00.041 [2024-07-23 02:02:08.802668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.300 [2024-07-23 02:02:09.019393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.300 [2024-07-23 02:02:09.019643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.300 [2024-07-23 02:02:09.019718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.300 [2024-07-23 02:02:09.019969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 
00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 
02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:00.559 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.560 02:02:09 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.465 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.465 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.465 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.465 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.465 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.465 02:02:11 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.465 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 
-- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.466 00:07:02.466 real 0m2.499s 00:07:02.466 user 0m0.018s 00:07:02.466 sys 0m0.004s 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.466 ************************************ 00:07:02.466 END TEST accel_decomp_full_mcore 00:07:02.466 ************************************ 00:07:02.466 02:02:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:02.466 02:02:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.466 02:02:11 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.466 02:02:11 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:02.466 02:02:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.466 02:02:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.466 
************************************ 00:07:02.466 START TEST accel_decomp_mthread 00:07:02.466 ************************************ 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:02.466 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:02.466 [2024-07-23 02:02:11.179737] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:02.466 [2024-07-23 02:02:11.179920] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62831 ] 00:07:02.725 [2024-07-23 02:02:11.356444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.983 [2024-07-23 02:02:11.578220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.241 02:02:11 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.241 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # 
accel_module=software 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.242 02:02:11 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.143 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.144 00:07:05.144 real 0m2.447s 00:07:05.144 user 0m2.111s 00:07:05.144 sys 0m0.240s 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.144 02:02:13 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 
00:07:05.144 ************************************ 00:07:05.144 END TEST accel_decomp_mthread 00:07:05.144 ************************************ 00:07:05.144 02:02:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.144 02:02:13 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.144 02:02:13 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:05.144 02:02:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.144 02:02:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.144 ************************************ 00:07:05.144 START TEST accel_decomp_full_mthread 00:07:05.144 ************************************ 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.144 02:02:13 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:05.144 [2024-07-23 02:02:13.669661] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:05.144 [2024-07-23 02:02:13.669822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62878 ] 00:07:05.144 [2024-07-23 02:02:13.826075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.402 [2024-07-23 02:02:14.033854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.661 02:02:14 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:05.661 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # 
case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 
02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.662 02:02:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:07.565 02:02:16 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.565 00:07:07.565 real 0m2.436s 00:07:07.565 user 0m2.134s 00:07:07.566 sys 0m0.206s 00:07:07.566 ************************************ 00:07:07.566 END TEST accel_decomp_full_mthread 00:07:07.566 ************************************ 00:07:07.566 02:02:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.566 02:02:16 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:07.566 02:02:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.566 02:02:16 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:07.566 02:02:16 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.566 02:02:16 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:07.566 02:02:16 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.566 02:02:16 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:07.566 02:02:16 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.566 02:02:16 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.566 02:02:16 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.566 02:02:16 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.566 02:02:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.566 02:02:16 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.566 02:02:16 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:07.566 02:02:16 accel -- accel/accel.sh@41 -- # jq -r . 00:07:07.566 ************************************ 00:07:07.566 START TEST accel_dif_functional_tests 00:07:07.566 ************************************ 00:07:07.566 02:02:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:07.566 [2024-07-23 02:02:16.254565] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:07.566 [2024-07-23 02:02:16.255670] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62925 ] 00:07:07.825 [2024-07-23 02:02:16.434346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.084 [2024-07-23 02:02:16.698123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.084 [2024-07-23 02:02:16.698224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.084 [2024-07-23 02:02:16.698253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.342 00:07:08.342 00:07:08.342 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.342 http://cunit.sourceforge.net/ 00:07:08.342 00:07:08.342 00:07:08.342 Suite: accel_dif 00:07:08.342 Test: verify: DIF generated, GUARD check ...passed 00:07:08.342 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.342 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.342 Test: verify: DIF not generated, GUARD check ...[2024-07-23 
02:02:16.977776] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.342 passed 00:07:08.342 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 02:02:16.978234] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.342 passed 00:07:08.342 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 02:02:16.978644] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.342 passed 00:07:08.342 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.342 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 02:02:16.979467] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.342 passed 00:07:08.342 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:08.342 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.342 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.342 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 02:02:16.980559] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.342 passed 00:07:08.342 Test: verify copy: DIF generated, GUARD check ...passed 00:07:08.342 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:08.342 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:08.342 Test: verify copy: DIF not generated, GUARD check ...passed[2024-07-23 02:02:16.981239] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.342 00:07:08.342 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-23 02:02:16.981458] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.342 passed 00:07:08.342 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-23 02:02:16.981607] dif.c: 811:_dif_reftag_check: *ERROR*: 
Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.342 passed 00:07:08.342 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.342 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.342 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.342 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.342 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:08.342 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.342 Test: generate copy: iovecs-len validate ...[2024-07-23 02:02:16.982556] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:08.342 passed 00:07:08.342 Test: generate copy: buffer alignment validate ...passed 00:07:08.342 00:07:08.342 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.342 suites 1 1 n/a 0 0 00:07:08.342 tests 26 26 26 0 0 00:07:08.342 asserts 115 115 115 0 n/a 00:07:08.342 00:07:08.342 Elapsed time = 0.012 seconds 00:07:09.277 00:07:09.277 real 0m1.839s 00:07:09.277 user 0m3.245s 00:07:09.277 sys 0m0.307s 00:07:09.277 ************************************ 00:07:09.277 END TEST accel_dif_functional_tests 00:07:09.277 ************************************ 00:07:09.277 02:02:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.277 02:02:17 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:09.277 02:02:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.277 ************************************ 00:07:09.277 END TEST accel 00:07:09.277 ************************************ 00:07:09.277 00:07:09.277 real 0m56.484s 00:07:09.277 user 1m0.177s 00:07:09.277 sys 0m6.281s 00:07:09.277 02:02:17 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.277 02:02:17 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.277 02:02:18 
-- common/autotest_common.sh@1142 -- # return 0 00:07:09.277 02:02:18 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:09.277 02:02:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.277 02:02:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.277 02:02:18 -- common/autotest_common.sh@10 -- # set +x 00:07:09.277 ************************************ 00:07:09.277 START TEST accel_rpc 00:07:09.277 ************************************ 00:07:09.277 02:02:18 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:09.535 * Looking for test storage... 00:07:09.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:09.535 02:02:18 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:09.535 02:02:18 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=63008 00:07:09.535 02:02:18 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:09.536 02:02:18 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 63008 00:07:09.536 02:02:18 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 63008 ']' 00:07:09.536 02:02:18 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.536 02:02:18 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.536 02:02:18 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.536 02:02:18 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.536 02:02:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.536 [2024-07-23 02:02:18.297399] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:09.536 [2024-07-23 02:02:18.297647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63008 ] 00:07:09.797 [2024-07-23 02:02:18.469422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.055 [2024-07-23 02:02:18.680247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.314 02:02:19 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.314 02:02:19 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:10.314 02:02:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:10.314 02:02:19 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:10.314 02:02:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:10.314 02:02:19 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:10.314 02:02:19 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:10.314 02:02:19 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.314 02:02:19 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.314 02:02:19 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.314 ************************************ 00:07:10.314 START TEST accel_assign_opcode 00:07:10.314 ************************************ 00:07:10.314 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:10.314 02:02:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:10.314 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.314 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.314 [2024-07-23 02:02:19.089093] 
accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:10.573 [2024-07-23 02:02:19.101033] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.573 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.141 software 00:07:11.141 ************************************ 00:07:11.141 END TEST accel_assign_opcode 00:07:11.141 ************************************ 00:07:11.141 00:07:11.141 real 
0m0.724s 00:07:11.141 user 0m0.056s 00:07:11.141 sys 0m0.011s 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.141 02:02:19 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:11.141 02:02:19 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:11.141 02:02:19 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 63008 00:07:11.141 02:02:19 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 63008 ']' 00:07:11.141 02:02:19 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 63008 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63008 00:07:11.142 killing process with pid 63008 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63008' 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@967 -- # kill 63008 00:07:11.142 02:02:19 accel_rpc -- common/autotest_common.sh@972 -- # wait 63008 00:07:13.047 ************************************ 00:07:13.047 END TEST accel_rpc 00:07:13.047 ************************************ 00:07:13.047 00:07:13.047 real 0m3.640s 00:07:13.047 user 0m3.507s 00:07:13.047 sys 0m0.581s 00:07:13.047 02:02:21 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.047 02:02:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.047 02:02:21 -- common/autotest_common.sh@1142 -- # return 0 00:07:13.047 02:02:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:13.047 02:02:21 -- common/autotest_common.sh@1099 
-- # '[' 2 -le 1 ']' 00:07:13.047 02:02:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.047 02:02:21 -- common/autotest_common.sh@10 -- # set +x 00:07:13.047 ************************************ 00:07:13.047 START TEST app_cmdline 00:07:13.047 ************************************ 00:07:13.047 02:02:21 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:13.047 * Looking for test storage... 00:07:13.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:13.306 02:02:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:13.306 02:02:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63121 00:07:13.306 02:02:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63121 00:07:13.306 02:02:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:13.306 02:02:21 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 63121 ']' 00:07:13.306 02:02:21 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.306 02:02:21 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.306 02:02:21 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.306 02:02:21 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.306 02:02:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.306 [2024-07-23 02:02:21.947134] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:13.306 [2024-07-23 02:02:21.947282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63121 ] 00:07:13.564 [2024-07-23 02:02:22.100191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.564 [2024-07-23 02:02:22.294947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.501 02:02:22 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.501 02:02:22 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:14.501 02:02:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:14.501 { 00:07:14.501 "version": "SPDK v24.09-pre git sha1 f7b31b2b9", 00:07:14.501 "fields": { 00:07:14.501 "major": 24, 00:07:14.501 "minor": 9, 00:07:14.501 "patch": 0, 00:07:14.501 "suffix": "-pre", 00:07:14.501 "commit": "f7b31b2b9" 00:07:14.501 } 00:07:14.501 } 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:14.501 02:02:23 app_cmdline -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:14.501 02:02:23 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:14.501 02:02:23 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.759 request: 00:07:14.759 { 00:07:14.759 "method": "env_dpdk_get_mem_stats", 00:07:14.759 "req_id": 1 00:07:14.759 } 00:07:14.759 Got JSON-RPC error response 00:07:14.759 response: 00:07:14.759 { 00:07:14.759 "code": -32601, 00:07:14.759 "message": "Method not found" 00:07:14.759 } 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@651 -- # es=1 
00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:14.759 02:02:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63121 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 63121 ']' 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 63121 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63121 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:14.759 killing process with pid 63121 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63121' 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@967 -- # kill 63121 00:07:14.759 02:02:23 app_cmdline -- common/autotest_common.sh@972 -- # wait 63121 00:07:16.691 00:07:16.691 real 0m3.603s 00:07:16.691 user 0m3.947s 00:07:16.691 sys 0m0.593s 00:07:16.691 02:02:25 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.691 02:02:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:16.691 ************************************ 00:07:16.691 END TEST app_cmdline 00:07:16.691 ************************************ 00:07:16.691 02:02:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:16.691 02:02:25 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:16.691 02:02:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.691 02:02:25 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.691 02:02:25 -- common/autotest_common.sh@10 -- # set +x 00:07:16.691 ************************************ 00:07:16.691 START TEST version 00:07:16.691 ************************************ 00:07:16.691 02:02:25 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:16.950 * Looking for test storage... 00:07:16.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:16.950 02:02:25 version -- app/version.sh@17 -- # get_header_version major 00:07:16.950 02:02:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # cut -f2 00:07:16.950 02:02:25 version -- app/version.sh@17 -- # major=24 00:07:16.950 02:02:25 version -- app/version.sh@18 -- # get_header_version minor 00:07:16.950 02:02:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # cut -f2 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.950 02:02:25 version -- app/version.sh@18 -- # minor=9 00:07:16.950 02:02:25 version -- app/version.sh@19 -- # get_header_version patch 00:07:16.950 02:02:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # cut -f2 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.950 02:02:25 version -- app/version.sh@19 -- # patch=0 00:07:16.950 02:02:25 version -- app/version.sh@20 -- # get_header_version suffix 00:07:16.950 02:02:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' 
/home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # cut -f2 00:07:16.950 02:02:25 version -- app/version.sh@14 -- # tr -d '"' 00:07:16.950 02:02:25 version -- app/version.sh@20 -- # suffix=-pre 00:07:16.950 02:02:25 version -- app/version.sh@22 -- # version=24.9 00:07:16.950 02:02:25 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:16.950 02:02:25 version -- app/version.sh@28 -- # version=24.9rc0 00:07:16.950 02:02:25 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:16.950 02:02:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:16.950 02:02:25 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:16.950 02:02:25 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:16.950 00:07:16.950 real 0m0.155s 00:07:16.950 user 0m0.088s 00:07:16.950 sys 0m0.099s 00:07:16.950 02:02:25 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.950 02:02:25 version -- common/autotest_common.sh@10 -- # set +x 00:07:16.950 ************************************ 00:07:16.950 END TEST version 00:07:16.950 ************************************ 00:07:16.950 02:02:25 -- common/autotest_common.sh@1142 -- # return 0 00:07:16.950 02:02:25 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:16.950 02:02:25 -- spdk/autotest.sh@198 -- # uname -s 00:07:16.950 02:02:25 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:16.950 02:02:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:16.950 02:02:25 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:16.950 02:02:25 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:16.950 02:02:25 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:16.950 02:02:25 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:16.950 
02:02:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.950 02:02:25 -- common/autotest_common.sh@10 -- # set +x 00:07:16.950 02:02:25 -- spdk/autotest.sh@262 -- # '[' 1 -eq 1 ']' 00:07:16.950 02:02:25 -- spdk/autotest.sh@263 -- # run_test iscsi_tgt /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:07:16.950 02:02:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:16.950 02:02:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.950 02:02:25 -- common/autotest_common.sh@10 -- # set +x 00:07:16.950 ************************************ 00:07:16.950 START TEST iscsi_tgt 00:07:16.950 ************************************ 00:07:16.950 02:02:25 iscsi_tgt -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/iscsi_tgt.sh 00:07:16.950 * Looking for test storage... 00:07:17.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # uname -s 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@18 -- # iscsicleanup 00:07:17.209 Cleaning up iSCSI connection 00:07:17.209 02:02:25 iscsi_tgt -- 
common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:07:17.209 02:02:25 iscsi_tgt -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:07:17.209 iscsiadm: No matching sessions found 00:07:17.209 02:02:25 iscsi_tgt -- common/autotest_common.sh@981 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:07:17.209 iscsiadm: No records found 00:07:17.209 02:02:25 iscsi_tgt -- common/autotest_common.sh@982 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- common/autotest_common.sh@983 -- # rm -rf 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@21 -- # create_veth_interfaces 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # ip link set init_br nomaster 00:07:17.209 Cannot find device "init_br" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@32 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # ip link set tgt_br nomaster 00:07:17.209 Cannot find device "tgt_br" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@33 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # ip link set tgt_br2 nomaster 00:07:17.209 Cannot find device "tgt_br2" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@34 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # ip link set init_br down 00:07:17.209 Cannot find device "init_br" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@35 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # ip link set tgt_br down 00:07:17.209 Cannot find device "tgt_br" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@36 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # ip link set tgt_br2 down 00:07:17.209 Cannot find device "tgt_br2" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@37 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # ip link delete iscsi_br type bridge 00:07:17.209 Cannot find device 
"iscsi_br" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@38 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # ip link delete spdk_init_int 00:07:17.209 Cannot find device "spdk_init_int" 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@39 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:07:17.209 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@40 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:07:17.209 Cannot open network namespace "spdk_iscsi_ns": No such file or directory 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@41 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # ip netns del spdk_iscsi_ns 00:07:17.209 Cannot remove namespace file "/var/run/netns/spdk_iscsi_ns": No such file or directory 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@42 -- # true 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@44 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@47 -- # ip netns add spdk_iscsi_ns 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@50 -- # ip link add spdk_init_int type veth peer name init_br 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@51 -- # ip link add spdk_tgt_int type veth peer name tgt_br 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@52 -- # ip link add spdk_tgt_int2 type veth peer name tgt_br2 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@55 -- # ip link set spdk_tgt_int netns spdk_iscsi_ns 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@56 -- # ip link set spdk_tgt_int2 netns spdk_iscsi_ns 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@59 -- # ip addr add 10.0.0.2/24 dev spdk_init_int 00:07:17.209 02:02:25 
iscsi_tgt -- iscsi_tgt/common.sh@60 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.1/24 dev spdk_tgt_int 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@61 -- # ip netns exec spdk_iscsi_ns ip addr add 10.0.0.3/24 dev spdk_tgt_int2 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@64 -- # ip link set spdk_init_int up 00:07:17.209 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@65 -- # ip link set init_br up 00:07:17.210 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@66 -- # ip link set tgt_br up 00:07:17.210 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@67 -- # ip link set tgt_br2 up 00:07:17.210 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@68 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int up 00:07:17.210 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@69 -- # ip netns exec spdk_iscsi_ns ip link set spdk_tgt_int2 up 00:07:17.210 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@70 -- # ip netns exec spdk_iscsi_ns ip link set lo up 00:07:17.210 02:02:25 iscsi_tgt -- iscsi_tgt/common.sh@73 -- # ip link add iscsi_br type bridge 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@74 -- # ip link set iscsi_br up 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@77 -- # ip link set init_br master iscsi_br 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@78 -- # ip link set tgt_br master iscsi_br 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@79 -- # ip link set tgt_br2 master iscsi_br 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@82 -- # iptables -I INPUT 1 -i spdk_init_int -p tcp --dport 3260 -j ACCEPT 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@83 -- # iptables -A FORWARD -i iscsi_br -o iscsi_br -j ACCEPT 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@86 -- # ping -c 1 10.0.0.1 00:07:17.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:17.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:07:17.468 00:07:17.468 --- 10.0.0.1 ping statistics --- 00:07:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.468 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@87 -- # ping -c 1 10.0.0.3 00:07:17.468 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:17.468 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:07:17.468 00:07:17.468 --- 10.0.0.3 ping statistics --- 00:07:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.468 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@88 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:07:17.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:07:17.468 00:07:17.468 --- 10.0.0.2 ping statistics --- 00:07:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.468 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/common.sh@89 -- # ip netns exec spdk_iscsi_ns ping -c 1 10.0.0.2 00:07:17.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:17.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.018 ms 00:07:17.468 00:07:17.468 --- 10.0.0.2 ping statistics --- 00:07:17.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.468 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@23 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:07:17.468 02:02:26 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@25 -- # run_test iscsi_tgt_sock /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:07:17.468 02:02:26 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:17.468 02:02:26 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.468 02:02:26 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:17.468 ************************************ 00:07:17.468 START TEST iscsi_tgt_sock 00:07:17.468 ************************************ 00:07:17.468 02:02:26 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock/sock.sh 00:07:17.468 * Looking for test storage... 
00:07:17.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/sock 00:07:17.468 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:17.468 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:17.468 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:17.469 02:02:26 
iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@48 -- # iscsitestinit 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@50 -- # HELLO_SOCK_APP='ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/examples/hello_sock' 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@51 -- # SOCAT_APP=socat 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@52 -- # OPENSSL_APP=openssl 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@53 -- # PSK='-N ssl -E 1234567890ABCDEF -I psk.spdk.io' 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@58 -- # timing_enter sock_client 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:17.469 Testing client path 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@59 -- # echo 'Testing client path' 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@63 -- # server_pid=63456 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@64 -- # trap 'killprocess $server_pid;iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@66 -- # waitfortcp 63456 10.0.0.2:3260 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@62 -- # socat tcp-l:3260,fork,bind=10.0.0.2 exec:/bin/cat 00:07:17.469 Waiting for process to start up and listen on address 10.0.0.2:3260... 
00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@25 -- # local addr=10.0.0.2:3260 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@27 -- # echo 'Waiting for process to start up and listen on address 10.0.0.2:3260...' 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- sock/sock.sh@29 -- # xtrace_disable 00:07:17.469 02:02:26 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:18.037 [2024-07-23 02:02:26.793254] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:18.037 [2024-07-23 02:02:26.793438] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63471 ] 00:07:18.295 [2024-07-23 02:02:26.977161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.555 [2024-07-23 02:02:27.248758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.555 [2024-07-23 02:02:27.248849] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:18.555 [2024-07-23 02:02:27.248888] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:18.555 [2024-07-23 02:02:27.249100] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 40716) 00:07:18.555 [2024-07-23 02:02:27.249292] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:19.491 [2024-07-23 02:02:28.249328] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:19.491 [2024-07-23 02:02:28.249459] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:20.060 [2024-07-23 02:02:28.623325] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:20.060 [2024-07-23 02:02:28.623532] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63491 ] 00:07:20.060 [2024-07-23 02:02:28.800988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.319 [2024-07-23 02:02:28.990957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.319 [2024-07-23 02:02:28.991057] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:20.319 [2024-07-23 02:02:28.991096] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:20.319 [2024-07-23 02:02:28.991273] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 55326) 00:07:20.319 [2024-07-23 02:02:28.991399] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:21.254 [2024-07-23 02:02:29.991427] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:21.254 [2024-07-23 02:02:29.991576] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:21.822 [2024-07-23 02:02:30.358725] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:21.822 [2024-07-23 02:02:30.358900] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63522 ] 00:07:21.822 [2024-07-23 02:02:30.532850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.093 [2024-07-23 02:02:30.741877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.093 [2024-07-23 02:02:30.741977] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:22.093 [2024-07-23 02:02:30.742018] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.2:3260 with sock_impl(posix) 00:07:22.093 [2024-07-23 02:02:30.742333] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.2, 3260) to (10.0.0.1, 55330) 00:07:22.093 [2024-07-23 02:02:30.742430] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:23.029 [2024-07-23 02:02:31.742464] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:23.029 [2024-07-23 02:02:31.742614] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:23.597 killing process with pid 63456 00:07:23.597 Testing SSL server path 00:07:23.597 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:07:23.597 [2024-07-23 02:02:32.183454] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:23.597 [2024-07-23 02:02:32.183611] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63572 ] 00:07:23.597 [2024-07-23 02:02:32.339947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.856 [2024-07-23 02:02:32.529379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.856 [2024-07-23 02:02:32.529538] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:23.856 [2024-07-23 02:02:32.529642] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(ssl) 00:07:24.115 [2024-07-23 02:02:32.694755] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:24.115 [2024-07-23 02:02:32.694864] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63577 ] 00:07:24.115 [2024-07-23 02:02:32.843704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.373 [2024-07-23 02:02:33.055888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.373 [2024-07-23 02:02:33.055990] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:24.373 [2024-07-23 02:02:33.056033] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:24.373 [2024-07-23 02:02:33.061195] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 36010) to (10.0.0.1, 3260) 00:07:24.373 [2024-07-23 02:02:33.061284] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 
3260) to (10.0.0.1, 36010) 00:07:24.373 [2024-07-23 02:02:33.064682] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 00:07:25.307 [2024-07-23 02:02:34.064743] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:25.307 [2024-07-23 02:02:34.064845] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:25.307 [2024-07-23 02:02:34.064955] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:25.875 [2024-07-23 02:02:34.457347] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:25.875 [2024-07-23 02:02:34.457527] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63606 ] 00:07:25.875 [2024-07-23 02:02:34.625566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.133 [2024-07-23 02:02:34.833951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.133 [2024-07-23 02:02:34.834063] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:26.133 [2024-07-23 02:02:34.834107] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:26.133 [2024-07-23 02:02:34.836175] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 36026) to (10.0.0.1, 3260) 00:07:26.133 [2024-07-23 02:02:34.839303] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 36026) 00:07:26.133 [2024-07-23 02:02:34.842098] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:27.069 [2024-07-23 02:02:35.842148] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:27.069 [2024-07-23 02:02:35.842245] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:27.069 [2024-07-23 02:02:35.842347] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:27.640 [2024-07-23 02:02:36.240072] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:27.640 [2024-07-23 02:02:36.240233] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63634 ] 00:07:27.640 [2024-07-23 02:02:36.411894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.898 [2024-07-23 02:02:36.606039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.898 [2024-07-23 02:02:36.606151] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:27.898 [2024-07-23 02:02:36.606193] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:27.898 [2024-07-23 02:02:36.607542] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 36040) to (10.0.0.1, 3260) 00:07:27.898 [2024-07-23 02:02:36.611227] posix.c: 755:posix_sock_create_ssl_context: *ERROR*: Incorrect TLS version provided: 7 00:07:27.898 [2024-07-23 02:02:36.611319] posix.c:1033:posix_sock_create: *ERROR*: posix_sock_create_ssl_context() failed, errno = 2 00:07:27.899 [2024-07-23 02:02:36.611374] hello_sock.c: 309:hello_sock_connect: *ERROR*: connect error(2): No such file or directory 00:07:27.899 [2024-07-23 02:02:36.611390] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.899 [2024-07-23 02:02:36.611457] hello_sock.c: 591:main: *ERROR*: ERROR starting 
application 00:07:27.899 [2024-07-23 02:02:36.611472] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:27.899 [2024-07-23 02:02:36.611550] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:28.465 [2024-07-23 02:02:36.991130] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:28.465 [2024-07-23 02:02:36.991292] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63645 ] 00:07:28.465 [2024-07-23 02:02:37.160632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.724 [2024-07-23 02:02:37.370024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.724 [2024-07-23 02:02:37.370137] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:28.724 [2024-07-23 02:02:37.370181] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:28.724 [2024-07-23 02:02:37.372313] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 36050) to (10.0.0.1, 3260) 00:07:28.724 [2024-07-23 02:02:37.375376] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 36050) 00:07:28.724 [2024-07-23 02:02:37.378175] hello_sock.c: 251:hello_sock_writev_poll: *NOTICE*: Closing connection... 
00:07:29.659 [2024-07-23 02:02:38.378224] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:29.659 [2024-07-23 02:02:38.378320] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:29.659 [2024-07-23 02:02:38.378424] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:30.226 SSL_connect:before SSL initialization 00:07:30.226 SSL_connect:SSLv3/TLS write client hello 00:07:30.226 [2024-07-23 02:02:38.786043] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 48374) to (10.0.0.1, 3260) 00:07:30.226 SSL_connect:SSLv3/TLS write client hello 00:07:30.226 SSL_connect:SSLv3/TLS read server hello 00:07:30.226 Can't use SSL_get_servername 00:07:30.226 SSL_connect:TLSv1.3 read encrypted extensions 00:07:30.226 SSL_connect:SSLv3/TLS read finished 00:07:30.226 SSL_connect:SSLv3/TLS write change cipher spec 00:07:30.226 SSL_connect:SSLv3/TLS write finished 00:07:30.226 SSL_connect:SSL negotiation finished successfully 00:07:30.226 SSL_connect:SSL negotiation finished successfully 00:07:30.226 SSL_connect:SSLv3/TLS read server session ticket 00:07:32.132 DONE 00:07:32.132 SSL3 alert write:warning:close notify 00:07:32.132 [2024-07-23 02:02:40.729911] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:32.132 [2024-07-23 02:02:40.786957] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:32.132 [2024-07-23 02:02:40.787114] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63695 ] 00:07:32.391 [2024-07-23 02:02:40.955055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.391 [2024-07-23 02:02:41.144710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.391 [2024-07-23 02:02:41.145074] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:32.391 [2024-07-23 02:02:41.145245] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:32.391 [2024-07-23 02:02:41.146644] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 43044) to (10.0.0.1, 3260) 00:07:32.391 [2024-07-23 02:02:41.150546] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 43044) 00:07:32.391 [2024-07-23 02:02:41.151959] hello_sock.c: 240:hello_sock_writev_poll: *ERROR*: Write to socket failed. Closing connection... 
00:07:32.391 [2024-07-23 02:02:41.151974] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:32.391 [2024-07-23 02:02:41.152040] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:33.767 [2024-07-23 02:02:42.152021] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:33.767 [2024-07-23 02:02:42.152174] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.767 [2024-07-23 02:02:42.152230] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:33.767 [2024-07-23 02:02:42.152249] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:33.767 [2024-07-23 02:02:42.512895] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:07:33.767 [2024-07-23 02:02:42.513058] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63716 ] 00:07:34.025 [2024-07-23 02:02:42.685822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.284 [2024-07-23 02:02:42.867622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.284 [2024-07-23 02:02:42.868518] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:34.284 [2024-07-23 02:02:42.868568] hello_sock.c: 304:hello_sock_connect: *NOTICE*: Connecting to the server on 10.0.0.1:3260 with sock_impl(ssl) 00:07:34.284 [2024-07-23 02:02:42.870265] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.1, 43056) to (10.0.0.1, 3260) 00:07:34.284 [2024-07-23 02:02:42.873935] hello_sock.c: 319:hello_sock_connect: *NOTICE*: Connection accepted from (10.0.0.1, 3260) to (10.0.0.1, 43056) 00:07:34.284 [2024-07-23 02:02:42.874947] posix.c: 
586:posix_sock_psk_find_session_server_cb: *ERROR*: Unknown Client's PSK ID 00:07:34.284 [2024-07-23 02:02:42.875039] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:34.284 [2024-07-23 02:02:42.875064] hello_sock.c: 208:hello_sock_recv_poll: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:07:35.241 [2024-07-23 02:02:43.875053] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:35.241 [2024-07-23 02:02:43.875218] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.241 [2024-07-23 02:02:43.875275] hello_sock.c: 591:main: *ERROR*: ERROR starting application 00:07:35.241 [2024-07-23 02:02:43.875289] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:35.500 killing process with pid 63572 00:07:36.883 [2024-07-23 02:02:45.222597] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:36.883 [2024-07-23 02:02:45.222711] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:36.883 Waiting for process to start up and listen on address 10.0.0.1:3260... 00:07:37.142 [2024-07-23 02:02:45.665451] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:37.142 [2024-07-23 02:02:45.665629] [ DPDK EAL parameters: hello_sock --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63774 ] 00:07:37.142 [2024-07-23 02:02:45.838715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.400 [2024-07-23 02:02:46.043704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.401 [2024-07-23 02:02:46.043817] hello_sock.c: 514:hello_start: *NOTICE*: Successfully started the application 00:07:37.401 [2024-07-23 02:02:46.043945] hello_sock.c: 472:hello_sock_listen: *NOTICE*: Listening connection on 10.0.0.1:3260 with sock_impl(posix) 00:07:37.401 [2024-07-23 02:02:46.134114] hello_sock.c: 407:hello_sock_accept_poll: *NOTICE*: Accepting a new connection from (10.0.0.2, 48378) to (10.0.0.1, 3260) 00:07:37.401 [2024-07-23 02:02:46.134295] hello_sock.c: 377:hello_sock_cb: *NOTICE*: Connection closed 00:07:37.401 killing process with pid 63774 00:07:38.776 [2024-07-23 02:02:47.159109] hello_sock.c: 162:hello_sock_close_timeout_poll: *NOTICE*: Connection closed 00:07:38.776 [2024-07-23 02:02:47.159221] hello_sock.c: 594:main: *NOTICE*: Exiting from application 00:07:38.776 ************************************ 00:07:38.776 END TEST iscsi_tgt_sock 00:07:38.776 ************************************ 00:07:38.776 00:07:38.776 real 0m21.407s 00:07:38.776 user 0m26.596s 00:07:38.776 sys 0m2.707s 00:07:38.776 02:02:47 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.776 02:02:47 iscsi_tgt.iscsi_tgt_sock -- common/autotest_common.sh@10 -- # set +x 00:07:39.035 02:02:47 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:07:39.035 02:02:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@26 -- # [[ -d /usr/local/calsoft ]] 00:07:39.035 02:02:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@27 -- # run_test 
iscsi_tgt_calsoft /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:07:39.035 02:02:47 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:39.035 02:02:47 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.035 02:02:47 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:07:39.035 ************************************ 00:07:39.035 START TEST iscsi_tgt_calsoft 00:07:39.035 ************************************ 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.sh 00:07:39.035 * Looking for test storage... 00:07:39.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 
00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:07:39.035 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@15 -- # MALLOC_BDEV_SIZE=64 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@16 -- # MALLOC_BLOCK_SIZE=512 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@18 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@19 -- # calsoft_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@22 -- # mkdir -p /usr/local/etc 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@23 -- # cp /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/its.conf /usr/local/etc/ 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@26 -- # echo IP=10.0.0.1 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@28 -- # timing_enter start_iscsi_tgt 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@722 -- # xtrace_disable 
00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@30 -- # iscsitestinit 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:07:39.036 Process pid: 63867 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@33 -- # pid=63867 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@34 -- # echo 'Process pid: 63867' 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@36 -- # trap 'killprocess $pid; delete_tmp_conf_files; iscsitestfini; exit 1 ' SIGINT SIGTERM EXIT 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@32 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x1 --wait-for-rpc 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@38 -- # waitforlisten 63867 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@829 -- # '[' -z 63867 ']' 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.036 02:02:47 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:39.295 [2024-07-23 02:02:47.854944] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:07:39.295 [2024-07-23 02:02:47.855144] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63867 ] 00:07:39.295 [2024-07-23 02:02:48.028670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.553 [2024-07-23 02:02:48.245911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.121 02:02:48 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.121 02:02:48 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@862 -- # return 0 00:07:40.121 02:02:48 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:07:40.379 02:02:48 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:07:41.314 02:02:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@41 -- # echo 'iscsi_tgt is listening. Running tests...' 00:07:41.314 iscsi_tgt is listening. Running tests... 
00:07:41.314 02:02:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@43 -- # timing_exit start_iscsi_tgt 00:07:41.314 02:02:49 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.314 02:02:49 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:07:41.314 02:02:49 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_auth_group 1 -c 'user:root secret:tester' 00:07:41.573 02:02:50 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_discovery_auth -g 1 00:07:41.832 02:02:50 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:07:42.090 02:02:50 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:07:42.349 02:02:50 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b MyBdev 64 512 00:07:42.607 MyBdev 00:07:42.608 02:02:51 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -g 1 00:07:42.608 02:02:51 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@55 -- # sleep 1 00:07:43.984 02:02:52 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@57 -- # '[' '' ']' 00:07:43.984 02:02:52 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/calsoft/calsoft.py /home/vagrant/spdk_repo/spdk/../output 00:07:43.984 [2024-07-23 02:02:52.498672] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:07:43.984 PDU 00:07:43.984 00000000 00 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 
00:07:43.984 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:43.984 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:43.984 [2024-07-23 02:02:52.499149] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:07:43.984 [2024-07-23 02:02:52.518806] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:43.984 [2024-07-23 02:02:52.539031] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(341) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:07:43.984 [2024-07-23 02:02:52.539162] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(8) ignore (ExpCmdSN=9, MaxCmdSN=71) 00:07:43.984 [2024-07-23 02:02:52.539962] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:43.984 [2024-07-23 02:02:52.558767] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:43.984 [2024-07-23 02:02:52.558898] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:43.984 [2024-07-23 02:02:52.609615] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:43.984 [2024-07-23 02:02:52.609730] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:43.984 [2024-07-23 02:02:52.627347] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:43.984 [2024-07-23 02:02:52.627474] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:07:43.984 [2024-07-23 02:02:52.627816] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:43.984 [2024-07-23 02:02:52.647705] iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:07:43.984 [2024-07-23 02:02:52.647857] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:43.984 [2024-07-23 02:02:52.681419] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:43.984 [2024-07-23 
02:02:52.700188] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:43.984 [2024-07-23 02:02:52.700230] iscsi.c:3961:iscsi_handle_recovery_datain: *ERROR*: Initiator requests BegRun: 0x00000000, RunLength:0x00001000 greater than maximum DataSN: 0x00000004. 00:07:43.984 [2024-07-23 02:02:52.700253] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=10) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:07:43.984 [2024-07-23 02:02:52.700266] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. Close the connection 00:07:43.984 [2024-07-23 02:02:52.754890] param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 276 00:07:43.984 [2024-07-23 02:02:52.754929] iscsi.c:1303:iscsi_op_login_store_incoming_params: *ERROR*: iscsi_parse_params() failed 00:07:44.243 [2024-07-23 02:02:52.773212] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:44.243 [2024-07-23 02:02:52.808418] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:44.243 [2024-07-23 02:02:52.844662] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:44.243 [2024-07-23 02:02:52.899144] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:44.243 [2024-07-23 02:02:52.899252] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:44.243 [2024-07-23 02:02:52.916523] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:44.243 [2024-07-23 02:02:52.988764] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:44.501 [2024-07-23 02:02:53.021791] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:44.501 [2024-07-23 02:02:53.041364] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:44.501 [2024-07-23 02:02:53.041527] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) 
error ExpCmdSN=5 00:07:44.501 [2024-07-23 02:02:53.092309] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=8, MaxCmdSN=71) 00:07:44.501 [2024-07-23 02:02:53.092517] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:44.501 [2024-07-23 02:02:53.112678] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:44.501 [2024-07-23 02:02:53.131992] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:44.502 [2024-07-23 02:02:53.150911] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:44.502 [2024-07-23 02:02:53.151053] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:44.502 [2024-07-23 02:02:53.168228] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:44.502 [2024-07-23 02:02:53.168437] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:44.502 [2024-07-23 02:02:53.227644] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.187277] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.221125] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=2 00:07:47.031 [2024-07-23 02:02:55.235747] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key ImmediateDataa 00:07:47.031 [2024-07-23 02:02:55.254060] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:47.031 [2024-07-23 02:02:55.307388] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.307523] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.361029] iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 2745410467, and the dataout task tag is 2728567458 00:07:47.031 [2024-07-23 02:02:55.361218] iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=0 00:07:47.031 [2024-07-23 02:02:55.361435] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:07:47.031 [2024-07-23 02:02:55.361563] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:47.031 [2024-07-23 02:02:55.378787] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:47.031 [2024-07-23 02:02:55.411630] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.031 [2024-07-23 02:02:55.411757] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.448729] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.448850] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.468149] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(2) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.468262] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.468349] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.533232] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.031 [2024-07-23 02:02:55.533362] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.566267] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.566393] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.603577] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=ffffffff 00:07:47.031 [2024-07-23 02:02:55.655218] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.031 [2024-07-23 02:02:55.674171] 
iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.031 [2024-07-23 02:02:55.674298] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.692768] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:47.031 [2024-07-23 02:02:55.710505] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.031 [2024-07-23 02:02:55.710703] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.031 [2024-07-23 02:02:55.729757] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.031 [2024-07-23 02:02:55.748562] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:47.031 [2024-07-23 02:02:55.767212] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.031 [2024-07-23 02:02:55.767355] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.290 [2024-07-23 02:02:55.833215] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.290 [2024-07-23 02:02:55.920814] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.290 [2024-07-23 02:02:55.920941] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.290 [2024-07-23 02:02:55.937170] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.290 [2024-07-23 02:02:55.955043] iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:07:47.290 [2024-07-23 02:02:55.955086] iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on iqn.2016-06.io.spdk:Target3,t,0x0001(iqn.1994-05.com.redhat:b3283535dc3b,i,0x00230d030000) 00:07:47.290 [2024-07-23 02:02:55.955102] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:47.290 [2024-07-23 02:02:55.973443] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.290 [2024-07-23 02:02:55.973577] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.290 [2024-07-23 02:02:55.992271] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:47.290 [2024-07-23 02:02:56.010069] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.290 [2024-07-23 02:02:56.010187] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.290 [2024-07-23 02:02:56.044580] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.290 [2024-07-23 02:02:56.044771] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.658 [2024-07-23 02:02:56.076172] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:47.658 [2024-07-23 02:02:56.094535] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.658 [2024-07-23 02:02:56.094677] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.658 [2024-07-23 02:02:56.113855] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.658 [2024-07-23 02:02:56.214232] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.658 [2024-07-23 02:02:56.214356] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.658 [2024-07-23 02:02:56.233364] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.658 [2024-07-23 02:02:56.233486] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.658 [2024-07-23 02:02:56.266476] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(3) error ExpCmdSN=4 00:07:47.658 [2024-07-23 02:02:56.266775] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 
00:07:47.658 [2024-07-23 02:02:56.319319] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:47.658 [2024-07-23 02:02:56.352783] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.658 [2024-07-23 02:02:56.403667] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:47.923 [2024-07-23 02:02:56.437284] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:47.923 [2024-07-23 02:02:56.487546] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:47.923 [2024-07-23 02:02:56.487885] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.923 [2024-07-23 02:02:56.523097] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=12 00:07:47.923 [2024-07-23 02:02:56.621579] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:47.923 [2024-07-23 02:02:56.651913] iscsi.c:4522:iscsi_pdu_hdr_handle: *ERROR*: before Full Feature 00:07:47.923 PDU 00:07:47.923 00000000 01 81 00 00 00 00 00 81 00 02 3d 03 00 00 00 00 ..........=..... 00:07:47.923 00000010 00 00 00 05 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:47.923 00000020 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................ 00:07:47.923 [2024-07-23 02:02:56.651983] iscsi.c:4840:iscsi_read_pdu: *ERROR*: Critical error is detected. 
Close the connection 00:07:47.923 [2024-07-23 02:02:56.670476] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:47.923 [2024-07-23 02:02:56.670620] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.181 [2024-07-23 02:02:56.701698] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:48.181 [2024-07-23 02:02:56.739573] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:48.181 [2024-07-23 02:02:56.739782] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.181 [2024-07-23 02:02:56.758786] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:48.181 [2024-07-23 02:02:56.759029] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.181 [2024-07-23 02:02:56.778501] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:48.181 [2024-07-23 02:02:56.798003] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:48.181 [2024-07-23 02:02:56.817112] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:48.181 [2024-07-23 02:02:56.817232] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.181 [2024-07-23 02:02:56.835630] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:48.181 [2024-07-23 02:02:56.870406] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:07:48.181 [2024-07-23 02:02:56.890269] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:48.181 [2024-07-23 02:02:56.890384] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.181 [2024-07-23 02:02:56.909688] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:48.182 [2024-07-23 02:02:56.909809] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error 
ExpCmdSN=5 00:07:48.182 [2024-07-23 02:02:56.928825] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:48.182 [2024-07-23 02:02:56.949366] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:48.182 [2024-07-23 02:02:56.949487] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.440 [2024-07-23 02:02:56.968764] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:48.440 [2024-07-23 02:02:56.968880] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:48.440 [2024-07-23 02:02:57.001254] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=4 00:07:48.440 [2024-07-23 02:02:57.048925] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:48.440 [2024-07-23 02:02:57.117160] iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 4/max 1, expecting 0 00:07:48.440 [2024-07-23 02:02:57.152078] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=6 00:07:48.440 [2024-07-23 02:02:57.186626] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:48.440 [2024-07-23 02:02:57.186750] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:49.006 [2024-07-23 02:02:57.648997] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=3, MaxCmdSN=66) 00:07:49.006 [2024-07-23 02:02:57.649524] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(1) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:49.006 [2024-07-23 02:02:57.649734] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=5, MaxCmdSN=67) 00:07:49.006 [2024-07-23 02:02:57.649903] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=6, MaxCmdSN=67) 00:07:49.006 [2024-07-23 02:02:57.650794] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:49.006 [2024-07-23 02:02:57.692202] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: 
CmdSN(0) error ExpCmdSN=2 00:07:49.006 [2024-07-23 02:02:57.713903] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:49.006 [2024-07-23 02:02:57.751669] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=9 00:07:49.006 [2024-07-23 02:02:57.772651] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=66) 00:07:49.006 [2024-07-23 02:02:57.772810] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:49.264 [2024-07-23 02:02:57.792538] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:49.264 [2024-07-23 02:02:57.792772] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=5 00:07:49.264 [2024-07-23 02:02:57.813133] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:49.264 [2024-07-23 02:02:57.870053] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=1 00:07:49.264 [2024-07-23 02:02:57.886881] param.c: 859:iscsi_negotiate_param_init: *ERROR*: unknown key TaskReporting 00:07:50.199 [2024-07-23 02:02:58.924750] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=4, MaxCmdSN=67) 00:07:51.134 [2024-07-23 02:02:59.907347] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=6, MaxCmdSN=68) 00:07:51.134 [2024-07-23 02:02:59.907712] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=7 00:07:51.393 [2024-07-23 02:02:59.924912] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(3) ignore (ExpCmdSN=5, MaxCmdSN=68) 00:07:52.327 [2024-07-23 02:03:00.925209] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(4) ignore (ExpCmdSN=6, MaxCmdSN=69) 00:07:52.327 [2024-07-23 02:03:00.925387] iscsi.c:4448:iscsi_update_cmdsn: *ERROR*: CmdSN(0) ignore (ExpCmdSN=7, MaxCmdSN=70) 00:07:52.327 [2024-07-23 02:03:00.925412] iscsi.c:4028:iscsi_handle_status_snack: *ERROR*: Unable to find StatSN: 0x00000007. 
For a StatusSNACK, assuming this is a proactive SNACK for an untransmitted StatSN, ignoring. 00:07:52.327 [2024-07-23 02:03:00.925451] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=8 00:08:04.525 [2024-07-23 02:03:12.971577] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:04.525 [2024-07-23 02:03:12.992539] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:04.525 [2024-07-23 02:03:13.012759] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:04.525 [2024-07-23 02:03:13.012836] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:04.525 [2024-07-23 02:03:13.033819] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:04.525 [2024-07-23 02:03:13.053849] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:04.525 [2024-07-23 02:03:13.076936] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1073741824) error ExpCmdSN=64 00:08:04.525 [2024-07-23 02:03:13.118851] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:04.525 [2024-07-23 02:03:13.120919] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=64 00:08:04.525 [2024-07-23 02:03:13.138570] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(1107296256) error ExpCmdSN=66 00:08:04.525 [2024-07-23 02:03:13.161815] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=65 00:08:04.525 [2024-07-23 02:03:13.179723] iscsi.c:4459:iscsi_update_cmdsn: *ERROR*: CmdSN(0) error ExpCmdSN=67 00:08:04.525 Skipping tc_ffp_15_2. It is known to fail. 00:08:04.525 Skipping tc_ffp_29_2. It is known to fail. 00:08:04.525 Skipping tc_ffp_29_3. It is known to fail. 00:08:04.525 Skipping tc_ffp_29_4. It is known to fail. 00:08:04.525 Skipping tc_err_1_1. It is known to fail. 00:08:04.525 Skipping tc_err_1_2. It is known to fail. 00:08:04.525 Skipping tc_err_2_8. It is known to fail. 
00:08:04.525 Skipping tc_err_3_1. It is known to fail. 00:08:04.525 Skipping tc_err_3_2. It is known to fail. 00:08:04.525 Skipping tc_err_3_3. It is known to fail. 00:08:04.525 Skipping tc_err_3_4. It is known to fail. 00:08:04.525 Skipping tc_err_5_1. It is known to fail. 00:08:04.525 Skipping tc_login_3_1. It is known to fail. 00:08:04.525 Skipping tc_login_11_2. It is known to fail. 00:08:04.525 Skipping tc_login_11_4. It is known to fail. 00:08:04.525 Skipping tc_login_2_2. It is known to fail. 00:08:04.525 Skipping tc_login_29_1. It is known to fail. 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@62 -- # failed=0 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@67 -- # iscsicleanup 00:08:04.525 Cleaning up iSCSI connection 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:08:04.525 iscsiadm: No matching sessions found 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@981 -- # true 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:08:04.525 iscsiadm: No records found 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@982 -- # true 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@983 -- # rm -rf 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@68 -- # killprocess 63867 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@948 -- # '[' -z 63867 ']' 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@952 -- # kill -0 63867 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- 
common/autotest_common.sh@953 -- # uname 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 63867 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:04.525 killing process with pid 63867 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63867' 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@967 -- # kill 63867 00:08:04.525 02:03:13 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@972 -- # wait 63867 00:08:07.061 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@69 -- # delete_tmp_conf_files 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@12 -- # rm -f /usr/local/etc/its.conf 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@70 -- # iscsitestfini 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- calsoft/calsoft.sh@71 -- # exit 0 00:08:07.062 00:08:07.062 real 0m27.669s 00:08:07.062 user 0m43.463s 00:08:07.062 sys 0m2.520s 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_calsoft -- common/autotest_common.sh@10 -- # set +x 00:08:07.062 ************************************ 00:08:07.062 END TEST iscsi_tgt_calsoft 00:08:07.062 ************************************ 00:08:07.062 02:03:15 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:08:07.062 02:03:15 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@31 -- # run_test iscsi_tgt_filesystem 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:08:07.062 02:03:15 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:07.062 02:03:15 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.062 02:03:15 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:08:07.062 ************************************ 00:08:07.062 START TEST iscsi_tgt_filesystem 00:08:07.062 ************************************ 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem/filesystem.sh 00:08:07.062 * Looking for test storage... 00:08:07.062 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/setup/common.sh 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:07.062 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:08:07.062 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 
00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 
00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:07.062 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- 
common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:07.063 #define SPDK_CONFIG_H 00:08:07.063 #define SPDK_CONFIG_APPS 1 00:08:07.063 #define SPDK_CONFIG_ARCH native 00:08:07.063 #define SPDK_CONFIG_ASAN 1 00:08:07.063 #undef SPDK_CONFIG_AVAHI 00:08:07.063 #undef SPDK_CONFIG_CET 00:08:07.063 #define SPDK_CONFIG_COVERAGE 1 00:08:07.063 #define SPDK_CONFIG_CROSS_PREFIX 00:08:07.063 #undef SPDK_CONFIG_CRYPTO 00:08:07.063 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:07.063 #undef SPDK_CONFIG_CUSTOMOCF 00:08:07.063 #undef SPDK_CONFIG_DAOS 00:08:07.063 #define SPDK_CONFIG_DAOS_DIR 00:08:07.063 #define SPDK_CONFIG_DEBUG 1 00:08:07.063 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:07.063 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:08:07.063 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:07.063 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:07.063 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:07.063 #undef SPDK_CONFIG_DPDK_UADK 00:08:07.063 #define 
SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:07.063 #define SPDK_CONFIG_EXAMPLES 1 00:08:07.063 #undef SPDK_CONFIG_FC 00:08:07.063 #define SPDK_CONFIG_FC_PATH 00:08:07.063 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:07.063 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:07.063 #undef SPDK_CONFIG_FUSE 00:08:07.063 #undef SPDK_CONFIG_FUZZER 00:08:07.063 #define SPDK_CONFIG_FUZZER_LIB 00:08:07.063 #undef SPDK_CONFIG_GOLANG 00:08:07.063 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:07.063 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:07.063 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:07.063 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:07.063 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:07.063 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:07.063 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:07.063 #define SPDK_CONFIG_IDXD 1 00:08:07.063 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:07.063 #undef SPDK_CONFIG_IPSEC_MB 00:08:07.063 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:07.063 #define SPDK_CONFIG_ISAL 1 00:08:07.063 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:07.063 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:07.063 #define SPDK_CONFIG_LIBDIR 00:08:07.063 #undef SPDK_CONFIG_LTO 00:08:07.063 #define SPDK_CONFIG_MAX_LCORES 128 00:08:07.063 #define SPDK_CONFIG_NVME_CUSE 1 00:08:07.063 #undef SPDK_CONFIG_OCF 00:08:07.063 #define SPDK_CONFIG_OCF_PATH 00:08:07.063 #define SPDK_CONFIG_OPENSSL_PATH 00:08:07.063 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:07.063 #define SPDK_CONFIG_PGO_DIR 00:08:07.063 #undef SPDK_CONFIG_PGO_USE 00:08:07.063 #define SPDK_CONFIG_PREFIX /usr/local 00:08:07.063 #undef SPDK_CONFIG_RAID5F 00:08:07.063 #define SPDK_CONFIG_RBD 1 00:08:07.063 #define SPDK_CONFIG_RDMA 1 00:08:07.063 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:07.063 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:07.063 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:07.063 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:07.063 #define SPDK_CONFIG_SHARED 1 00:08:07.063 #undef SPDK_CONFIG_SMA 00:08:07.063 
#define SPDK_CONFIG_TESTS 1 00:08:07.063 #undef SPDK_CONFIG_TSAN 00:08:07.063 #define SPDK_CONFIG_UBLK 1 00:08:07.063 #define SPDK_CONFIG_UBSAN 1 00:08:07.063 #undef SPDK_CONFIG_UNIT_TESTS 00:08:07.063 #undef SPDK_CONFIG_URING 00:08:07.063 #define SPDK_CONFIG_URING_PATH 00:08:07.063 #undef SPDK_CONFIG_URING_ZNS 00:08:07.063 #undef SPDK_CONFIG_USDT 00:08:07.063 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:07.063 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:07.063 #undef SPDK_CONFIG_VFIO_USER 00:08:07.063 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:07.063 #define SPDK_CONFIG_VHOST 1 00:08:07.063 #define SPDK_CONFIG_VIRTIO 1 00:08:07.063 #undef SPDK_CONFIG_VTUNE 00:08:07.063 #define SPDK_CONFIG_VTUNE_DIR 00:08:07.063 #define SPDK_CONFIG_WERROR 1 00:08:07.063 #define SPDK_CONFIG_WPDK_DIR 00:08:07.063 #undef SPDK_CONFIG_XNVME 00:08:07.063 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # uname -s 00:08:07.063 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 
00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@58 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@70 -- # : 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@76 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@78 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@86 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@92 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@94 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@104 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:07.064 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@120 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@124 -- # : 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:07.064 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@138 -- # : 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 
00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@154 -- # : 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:07.064 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@167 -- # : 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # 
export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:07.065 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j10 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # [[ -z 64603 ]] 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@318 -- # kill -0 64603 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@330 -- # local 
requested_size=2147483648 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.UHj1v9 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem /tmp/spdk.UHj1v9/tests/filesystem /tmp/spdk.UHj1v9 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.065 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=devtmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=4194304 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=4194304 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6263177216 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267887616 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=2496167936 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=2507157504 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10989568 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem 
-- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13788835840 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5240360960 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda5 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=btrfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=13788835840 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=20314062848 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5240360960 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda2 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext4 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=843546624 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1012768768 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=100016128 00:08:07.066 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6267748352 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6267891712 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=143360 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/vda3 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=vfat 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=92499968 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=104607744 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12107776 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1253572608 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=1253576704 
00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/iscsi-vg-autotest/fedora38-libvirt/output 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=fuse.sshfs 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=98743451648 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=105088212992 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=959328256 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:07.066 * Looking for test storage... 
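The storage search above parses `df -T` into associative arrays keyed by mount point and then checks each candidate directory against the requested size. A hedged, self-contained re-creation of that flow (array names and the `read` field order follow the trace; the candidate directory is a stand-in for `$testdir`, and 2214592512 is the 2 GiB + 64 MiB figure the harness requests):

```shell
#!/usr/bin/env bash
# Sketch of the set_test_storage flow visible in the trace. Array names
# (mounts, fss, sizes, avails, uses) match the log output above.
declare -A mounts fss sizes avails uses
# df -T columns: Filesystem Type 1K-blocks Used Available Use% Mounted-on
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source
  fss["$mount"]=$fs
  sizes["$mount"]=$((size * 1024))    # df reports 1K blocks; convert to bytes
  avails["$mount"]=$((avail * 1024))
  uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

requested_size=2214592512             # 2 GiB payload + overhead, as in the trace
target_dir=$PWD                       # stand-in candidate; the harness tries $testdir first
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
target_space=${avails["$mount"]:-0}
if (( target_space >= requested_size )); then
  echo "enough space on $mount"
else
  echo "would fall back to mktemp-backed storage"
fi
```

The fallback branch corresponds to the `mktemp -udt spdk.XXXXXX` candidate seen earlier in the trace: when neither `$testdir` nor the fallback mount has the requested space, the harness keeps walking `storage_candidates`.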
00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # df /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@372 -- # mount=/home 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@374 -- # target_space=13788835840 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == tmpfs ]] 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ btrfs == ramfs ]] 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@380 -- # [[ /home == / ]] 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:07.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/filesystem 00:08:07.066 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:07.066 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@11 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@5 -- # export PATH 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@13 -- # iscsitestinit 00:08:07.067 02:03:15 
iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@29 -- # timing_enter start_iscsi_tgt 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@32 -- # pid=64640 00:08:07.067 Process pid: 64640 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@33 -- # echo 'Process pid: 64640' 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@35 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@37 -- # waitforlisten 64640 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@829 -- # '[' -z 64640 ']' 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:07.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
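`waitforlisten` above blocks until the freshly launched `iscsi_tgt` creates its RPC UNIX socket, retrying up to `max_retries` times. A minimal sketch of that polling pattern (the socket path and the pre-created file are demo stand-ins, not the real `/var/tmp/spdk.sock`):

```shell
#!/usr/bin/env bash
# Hedged re-creation of the waitforlisten pattern: poll for the RPC socket
# with a bounded retry count instead of sleeping a fixed amount.
rpc_addr=/tmp/demo_spdk.sock   # stand-in path for the demo
max_retries=100
: > "$rpc_addr"                # stand-in: the real harness waits for iscsi_tgt to create this
i=0
while (( i < max_retries )) && [ ! -e "$rpc_addr" ]; do
  sleep 0.1
  i=$((i + 1))
done
[ -e "$rpc_addr" ] && echo "socket ready after $i retries"
rm -f "$rpc_addr"
```

Because the stand-in file exists up front, the loop exits immediately; against a real target the loop would spin until the daemon finishes `--wait-for-rpc` startup or the retry budget runs out.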
00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:07.067 02:03:15 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.067 [2024-07-23 02:03:15.652332] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:08:07.067 [2024-07-23 02:03:15.652534] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64640 ] 00:08:07.067 [2024-07-23 02:03:15.812595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.325 [2024-07-23 02:03:16.015720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.325 [2024-07-23 02:03:16.015850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.325 [2024-07-23 02:03:16.015962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.325 [2024-07-23 02:03:16.015984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@862 -- # return 0 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@38 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@39 -- # rpc_cmd framework_start_init 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:07.891 02:03:16 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.457 iscsi_tgt is listening. Running tests... 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@40 -- # echo 'iscsi_tgt is listening. Running tests...' 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@42 -- # timing_exit start_iscsi_tgt 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # get_first_nvme_bdf 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # bdfs=() 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1524 -- # local bdfs 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1513 -- # local bdfs 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:08.457 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@44 -- # bdf=0000:00:10.0 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@45 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@46 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@47 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:00:10.0 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 Nvme0n1 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # rpc_cmd bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@49 -- # ls_guid=300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # get_lvs_free_mb 300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1364 -- # local lvs_uuid=300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1365 -- # local lvs_info 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1366 -- # local fc 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1367 -- # local cs 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_lvol_get_lvstores 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:08:08.717 { 00:08:08.717 "uuid": "300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4", 00:08:08.717 "name": "lvs_0", 00:08:08.717 "base_bdev": "Nvme0n1", 00:08:08.717 "total_data_clusters": 1278, 00:08:08.717 "free_clusters": 1278, 00:08:08.717 "block_size": 4096, 00:08:08.717 "cluster_size": 4194304 00:08:08.717 } 00:08:08.717 ]' 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4") 
.free_clusters' 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1369 -- # fc=1278 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4") .cluster_size' 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1370 -- # cs=4194304 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1373 -- # free_mb=5112 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1374 -- # echo 5112 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@50 -- # free_mb=5112 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@52 -- # '[' 5112 -gt 2048 ']' 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@53 -- # rpc_cmd bdev_lvol_create -u 300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4 lbd_0 2048 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 a97c5fa0-c5f9-4d78-82e4-e50dd4500541 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@61 -- # lvol_name=lvs_0/lbd_0 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@62 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias lvs_0/lbd_0:0 1:2 256 -d 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:08.717 02:03:17 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@63 -- # sleep 1 00:08:10.092 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@65 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:08:10.092 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:08:10.092 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@66 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:10.092 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:10.092 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@67 -- # waitforiscsidevices 1 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:10.093 [2024-07-23 02:03:18.604324] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@119 -- # n=1 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@123 -- # return 0 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # get_bdev_size lvs_0/lbd_0 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1378 -- # local bdev_name=lvs_0/lbd_0 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:10.093 02:03:18 
iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1380 -- # local bs 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1381 -- # local nb 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b lvs_0/lbd_0 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:10.093 { 00:08:10.093 "name": "a97c5fa0-c5f9-4d78-82e4-e50dd4500541", 00:08:10.093 "aliases": [ 00:08:10.093 "lvs_0/lbd_0" 00:08:10.093 ], 00:08:10.093 "product_name": "Logical Volume", 00:08:10.093 "block_size": 4096, 00:08:10.093 "num_blocks": 524288, 00:08:10.093 "uuid": "a97c5fa0-c5f9-4d78-82e4-e50dd4500541", 00:08:10.093 "assigned_rate_limits": { 00:08:10.093 "rw_ios_per_sec": 0, 00:08:10.093 "rw_mbytes_per_sec": 0, 00:08:10.093 "r_mbytes_per_sec": 0, 00:08:10.093 "w_mbytes_per_sec": 0 00:08:10.093 }, 00:08:10.093 "claimed": false, 00:08:10.093 "zoned": false, 00:08:10.093 "supported_io_types": { 00:08:10.093 "read": true, 00:08:10.093 "write": true, 00:08:10.093 "unmap": true, 00:08:10.093 "flush": false, 00:08:10.093 "reset": true, 00:08:10.093 "nvme_admin": false, 00:08:10.093 "nvme_io": false, 00:08:10.093 "nvme_io_md": false, 00:08:10.093 "write_zeroes": true, 00:08:10.093 "zcopy": false, 00:08:10.093 "get_zone_info": false, 00:08:10.093 "zone_management": false, 00:08:10.093 "zone_append": false, 00:08:10.093 "compare": false, 00:08:10.093 "compare_and_write": false, 00:08:10.093 "abort": false, 00:08:10.093 "seek_hole": true, 00:08:10.093 "seek_data": true, 00:08:10.093 "copy": false, 00:08:10.093 "nvme_iov_md": false 00:08:10.093 }, 
00:08:10.093 "driver_specific": { 00:08:10.093 "lvol": { 00:08:10.093 "lvol_store_uuid": "300b9fc6-7ffd-4eaf-ab1f-e4a45446cac4", 00:08:10.093 "base_bdev": "Nvme0n1", 00:08:10.093 "thin_provision": false, 00:08:10.093 "num_allocated_clusters": 512, 00:08:10.093 "snapshot": false, 00:08:10.093 "clone": false, 00:08:10.093 "esnap_clone": false 00:08:10.093 } 00:08:10.093 } 00:08:10.093 } 00:08:10.093 ]' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1383 -- # bs=4096 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1384 -- # nb=524288 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1387 -- # bdev_size=2048 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1388 -- # echo 2048 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@69 -- # lvol_size=2147483648 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@70 -- # trap 'iscsicleanup; remove_backends; umount /mnt/device; rm -rf /mnt/device; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@72 -- # mkdir -p /mnt/device 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # iscsiadm -m session -P 3 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # grep 'Attached scsi disk' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # awk '{print $4}' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@74 -- # dev=sda 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- 
filesystem/filesystem.sh@76 -- # waitforfile /dev/sda 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1265 -- # local i=0 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1276 -- # return 0 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # sec_size_to_bytes sda 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@76 -- # local dev=sda 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@78 -- # [[ -e /sys/block/sda ]] 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- setup/common.sh@80 -- # echo 2147483648 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@78 -- # dev_size=2147483648 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@80 -- # (( lvol_size == dev_size )) 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@81 -- # parted -s /dev/sda mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:10.093 02:03:18 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@82 -- # sleep 1 00:08:10.093 [2024-07-23 02:03:18.793748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:11.029 02:03:19 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@144 -- # run_test iscsi_tgt_filesystem_ext4 filesystem_test ext4 00:08:11.029 02:03:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:11.029 02:03:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.029 02:03:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.287 ************************************ 
00:08:11.287 START TEST iscsi_tgt_filesystem_ext4 00:08:11.287 ************************************ 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1123 -- # filesystem_test ext4 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@89 -- # fstype=ext4 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@91 -- # make_filesystem ext4 /dev/sda1 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:11.287 02:03:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda1 00:08:11.287 mke2fs 1.46.5 (30-Dec-2021) 00:08:11.287 Discarding device blocks: 0/522240 done 00:08:11.287 Creating filesystem with 522240 4k blocks and 130560 inodes 00:08:11.287 Filesystem UUID: 5192c1ef-aa4f-4b5e-a636-26773d03cc9c 00:08:11.287 Superblock backups stored on blocks: 00:08:11.287 32768, 98304, 163840, 229376, 294912 00:08:11.287 00:08:11.287 Allocating group tables: 0/16 done 00:08:11.287 Writing inode tables: 0/16 done 00:08:11.546 Creating journal (8192 blocks): done 00:08:11.546 Writing superblocks and filesystem accounting 
information: 0/16 done 00:08:11.546 00:08:11.546 02:03:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:11.546 02:03:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:11.546 02:03:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:08:11.546 02:03:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:08:11.546 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:08:11.546 fio-3.35 00:08:11.546 Starting 1 thread 00:08:11.546 job0: Laying out IO file (1 file / 1024MiB) 00:08:33.483 00:08:33.483 job0: (groupid=0, jobs=1): err= 0: pid=64806: Tue Jul 23 02:03:39 2024 00:08:33.483 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(1024MiB/19248msec); 0 zone resets 00:08:33.483 slat (usec): min=5, max=33609, avg=22.27, stdev=173.46 00:08:33.483 clat (usec): min=1312, max=47498, avg=4674.61, stdev=2003.60 00:08:33.483 lat (usec): min=1330, max=47722, avg=4696.88, stdev=2018.25 00:08:33.483 clat percentiles (usec): 00:08:33.483 | 1.00th=[ 2442], 5.00th=[ 2704], 10.00th=[ 3032], 20.00th=[ 3458], 00:08:33.483 | 30.00th=[ 3982], 40.00th=[ 4359], 50.00th=[ 4555], 60.00th=[ 4883], 00:08:33.483 | 70.00th=[ 5145], 80.00th=[ 5473], 90.00th=[ 6128], 95.00th=[ 6652], 00:08:33.483 | 99.00th=[ 7308], 99.50th=[10159], 99.90th=[28705], 99.95th=[42206], 00:08:33.483 | 99.99th=[45351] 00:08:33.483 bw ( KiB/s): min=47368, max=57736, per=100.00%, avg=54583.58, stdev=3110.61, samples=38 00:08:33.483 iops : min=11842, max=14432, avg=13645.95, stdev=777.66, samples=38 00:08:33.483 lat (msec) : 2=0.05%, 4=30.15%, 10=69.29%, 20=0.09%, 50=0.41% 00:08:33.483 cpu : 
usr=5.23%, sys=18.09%, ctx=23197, majf=0, minf=1 00:08:33.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:08:33.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:33.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:08:33.483 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:33.483 latency : target=0, window=0, percentile=100.00%, depth=64 00:08:33.483 00:08:33.483 Run status group 0 (all jobs): 00:08:33.483 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=1024MiB (1074MB), run=19248-19248msec 00:08:33.483 00:08:33.483 Disk stats (read/write): 00:08:33.483 sda: ios=0/259755, merge=0/2652, ticks=0/1096400, in_queue=1096401, util=99.49% 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:08:33.483 Logging out of session [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:33.483 Logout of [sid: 1, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:33.483 iscsiadm: No active sessions. 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # true 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:08:33.483 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:08:33.483 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@116 -- # local num=1 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:08:33.483 [2024-07-23 02:03:39.811114] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@119 -- # n=1 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- iscsi_tgt/common.sh@123 -- # return 0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@103 -- # dev=sda 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:08:33.483 02:03:39 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1265 -- # local i=0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1276 -- # return 0 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:08:33.483 File existed. 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:08:33.483 02:03:39 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:08:33.483 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:08:33.483 fio-3.35 00:08:33.483 Starting 1 thread 00:08:51.577 00:08:51.577 job0: (groupid=0, jobs=1): err= 0: pid=65161: Tue Jul 23 02:04:00 2024 00:08:51.577 read: IOPS=13.8k, BW=53.9MiB/s (56.5MB/s)(1078MiB/20004msec) 00:08:51.577 slat (usec): min=3, max=4382, avg= 9.40, stdev=55.70 00:08:51.577 clat (usec): min=716, max=35062, avg=4626.15, stdev=1461.58 00:08:51.577 lat (usec): min=744, max=36110, avg=4635.55, stdev=1472.78 00:08:51.577 clat percentiles (usec): 00:08:51.577 | 1.00th=[ 2409], 5.00th=[ 2933], 10.00th=[ 3097], 20.00th=[ 3359], 00:08:51.577 | 30.00th=[ 3884], 40.00th=[ 4146], 50.00th=[ 4621], 60.00th=[ 
4883], 00:08:51.577 | 70.00th=[ 5276], 80.00th=[ 5669], 90.00th=[ 6194], 95.00th=[ 6521], 00:08:51.577 | 99.00th=[ 7308], 99.50th=[ 8848], 99.90th=[21627], 99.95th=[27132], 00:08:51.577 | 99.99th=[30802] 00:08:51.577 bw ( KiB/s): min=24464, max=63336, per=100.00%, avg=55214.15, stdev=5262.67, samples=39 00:08:51.577 iops : min= 6116, max=15834, avg=13803.54, stdev=1315.67, samples=39 00:08:51.577 lat (usec) : 750=0.01%, 1000=0.01% 00:08:51.577 lat (msec) : 2=0.10%, 4=35.02%, 10=64.51%, 20=0.26%, 50=0.11% 00:08:51.577 cpu : usr=4.67%, sys=9.76%, ctx=26363, majf=0, minf=65 00:08:51.577 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:08:51.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:51.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:08:51.577 issued rwts: total=275849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:51.577 latency : target=0, window=0, percentile=100.00%, depth=64 00:08:51.577 00:08:51.577 Run status group 0 (all jobs): 00:08:51.577 READ: bw=53.9MiB/s (56.5MB/s), 53.9MiB/s-53.9MiB/s (56.5MB/s-56.5MB/s), io=1078MiB (1130MB), run=20004-20004msec 00:08:51.577 00:08:51.577 Disk stats (read/write): 00:08:51.577 sda: ios=273499/5, merge=1399/2, ticks=1181790/7, in_queue=1181797, util=99.60% 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:08:51.577 00:08:51.577 real 0m40.356s 00:08:51.577 user 0m2.203s 00:08:51.577 sys 0m5.704s 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:51.577 ************************************ 00:08:51.577 
END TEST iscsi_tgt_filesystem_ext4 00:08:51.577 ************************************ 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@145 -- # run_test iscsi_tgt_filesystem_btrfs filesystem_test btrfs 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.577 ************************************ 00:08:51.577 START TEST iscsi_tgt_filesystem_btrfs 00:08:51.577 ************************************ 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1123 -- # filesystem_test btrfs 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@89 -- # fstype=btrfs 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@91 -- # make_filesystem btrfs /dev/sda1 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
common/autotest_common.sh@932 -- # force=-f 00:08:51.577 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/sda1 00:08:51.836 btrfs-progs v6.6.2 00:08:51.836 See https://btrfs.readthedocs.io for more information. 00:08:51.836 00:08:51.836 Performing full device TRIM /dev/sda1 (1.99GiB) ... 00:08:51.836 NOTE: several default settings have changed in version 5.15, please make sure 00:08:51.836 this does not affect your deployments: 00:08:51.836 - DUP for metadata (-m dup) 00:08:51.836 - enabled no-holes (-O no-holes) 00:08:51.836 - enabled free-space-tree (-R free-space-tree) 00:08:51.836 00:08:51.837 Label: (null) 00:08:51.837 UUID: c249cfa7-1f56-47fe-8e09-ac9f63c6d317 00:08:51.837 Node size: 16384 00:08:51.837 Sector size: 4096 00:08:51.837 Filesystem size: 1.99GiB 00:08:51.837 Block group profiles: 00:08:51.837 Data: single 8.00MiB 00:08:51.837 Metadata: DUP 102.00MiB 00:08:51.837 System: DUP 8.00MiB 00:08:51.837 SSD detected: yes 00:08:51.837 Zoned device: no 00:08:51.837 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:51.837 Runtime features: free-space-tree 00:08:51.837 Checksum: crc32c 00:08:51.837 Number of devices: 1 00:08:51.837 Devices: 00:08:51.837 ID SIZE PATH 00:08:51.837 1 1.99GiB /dev/sda1 00:08:51.837 00:08:51.837 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:51.837 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:08:51.837 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:08:51.837 02:04:00 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 
00:08:51.837 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:08:51.837 fio-3.35 00:08:51.837 Starting 1 thread 00:08:51.837 job0: Laying out IO file (1 file / 1024MiB) 00:09:13.813 00:09:13.813 job0: (groupid=0, jobs=1): err= 0: pid=65422: Tue Jul 23 02:04:19 2024 00:09:13.813 write: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(1024MiB/19146msec); 0 zone resets 00:09:13.813 slat (usec): min=7, max=4249, avg=44.73, stdev=91.94 00:09:13.813 clat (usec): min=1242, max=14666, avg=4627.16, stdev=1289.63 00:09:13.813 lat (usec): min=1277, max=14698, avg=4671.89, stdev=1297.55 00:09:13.813 clat percentiles (usec): 00:09:13.813 | 1.00th=[ 2147], 5.00th=[ 2606], 10.00th=[ 2999], 20.00th=[ 3490], 00:09:13.813 | 30.00th=[ 3916], 40.00th=[ 4293], 50.00th=[ 4621], 60.00th=[ 4948], 00:09:13.813 | 70.00th=[ 5211], 80.00th=[ 5538], 90.00th=[ 6325], 95.00th=[ 6849], 00:09:13.813 | 99.00th=[ 8094], 99.50th=[ 8848], 99.90th=[10290], 99.95th=[10945], 00:09:13.813 | 99.99th=[11994] 00:09:13.813 bw ( KiB/s): min=50504, max=59520, per=99.92%, avg=54723.16, stdev=1821.80, samples=38 00:09:13.813 iops : min=12626, max=14880, avg=13680.79, stdev=455.45, samples=38 00:09:13.813 lat (msec) : 2=0.45%, 4=31.69%, 10=67.73%, 20=0.13% 00:09:13.813 cpu : usr=5.75%, sys=32.78%, ctx=45690, majf=0, minf=1 00:09:13.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:13.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:13.813 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:09:13.813 00:09:13.813 Run status group 0 (all jobs): 00:09:13.813 WRITE: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=1024MiB (1074MB), run=19146-19146msec 00:09:13.813 02:04:19 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:09:13.813 02:04:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:09:13.813 Logging out of session [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:13.813 Logout of [sid: 2, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:13.813 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:13.814 iscsiadm: No active sessions. 
00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # true 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:13.814 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:13.814 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:13.814 [2024-07-23 02:04:20.061202] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@119 -- # n=1 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- 
iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- iscsi_tgt/common.sh@123 -- # return 0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1265 -- # local i=0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1276 -- # return 0 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:09:13.814 File existed. 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 
00:09:13.814 02:04:20 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:09:13.814 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:13.814 fio-3.35 00:09:13.814 Starting 1 thread 00:09:31.946 00:09:31.946 job0: (groupid=0, jobs=1): err= 0: pid=65710: Tue Jul 23 02:04:40 2024 00:09:31.946 read: IOPS=16.1k, BW=62.8MiB/s (65.9MB/s)(1257MiB/20004msec) 00:09:31.946 slat (usec): min=2, max=2688, avg= 9.23, stdev=25.36 00:09:31.946 clat (usec): min=916, max=41020, avg=3964.16, stdev=1192.93 00:09:31.946 lat (usec): min=1255, max=42598, avg=3973.39, stdev=1200.92 00:09:31.946 clat percentiles (usec): 00:09:31.946 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2966], 00:09:31.946 | 30.00th=[ 3294], 40.00th=[ 3621], 50.00th=[ 3916], 60.00th=[ 4228], 00:09:31.946 | 70.00th=[ 4555], 80.00th=[ 4883], 90.00th=[ 5276], 95.00th=[ 5604], 00:09:31.946 | 99.00th=[ 6194], 99.50th=[ 6456], 99.90th=[11469], 99.95th=[20841], 00:09:31.946 | 99.99th=[34866] 00:09:31.946 bw ( KiB/s): min=46128, max=75776, per=100.00%, avg=64414.56, stdev=4229.29, samples=39 00:09:31.946 iops : min=11532, max=18944, avg=16103.64, stdev=1057.32, samples=39 00:09:31.946 lat (usec) : 1000=0.01% 00:09:31.946 lat (msec) : 2=0.33%, 4=52.46%, 10=47.09%, 20=0.07%, 50=0.05% 00:09:31.946 cpu : usr=4.48%, sys=13.42%, ctx=44377, majf=0, minf=65 00:09:31.946 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:31.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:31.946 issued rwts: total=321748,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.946 latency : target=0, window=0, percentile=100.00%, depth=64 
00:09:31.946 00:09:31.946 Run status group 0 (all jobs): 00:09:31.946 READ: bw=62.8MiB/s (65.9MB/s), 62.8MiB/s-62.8MiB/s (65.9MB/s-65.9MB/s), io=1257MiB (1318MB), run=20004-20004msec 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:09:31.946 ************************************ 00:09:31.946 END TEST iscsi_tgt_filesystem_btrfs 00:09:31.946 ************************************ 00:09:31.946 00:09:31.946 real 0m40.217s 00:09:31.946 user 0m2.252s 00:09:31.946 sys 0m9.336s 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@146 -- # run_test iscsi_tgt_filesystem_xfs filesystem_test xfs 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:31.946 ************************************ 00:09:31.946 START TEST iscsi_tgt_filesystem_xfs 00:09:31.946 ************************************ 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1123 -- # filesystem_test xfs 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@89 -- # fstype=xfs 00:09:31.946 02:04:40 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@91 -- # make_filesystem xfs /dev/sda1 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda1 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:31.946 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/sda1 00:09:31.946 meta-data=/dev/sda1 isize=512 agcount=4, agsize=130560 blks 00:09:31.946 = sectsz=4096 attr=2, projid32bit=1 00:09:31.946 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:31.946 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:31.946 data = bsize=4096 blocks=522240, imaxpct=25 00:09:31.946 = sunit=0 swidth=0 blks 00:09:31.946 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:31.946 log =internal log bsize=4096 blocks=16384, version=2 00:09:31.946 = sectsz=4096 sunit=1 blks, lazy-count=1 00:09:31.946 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:32.512 Discarding blocks...Done. 
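The mkfs.xfs geometry printed above is internally consistent and can be sanity-checked by hand: the four allocation groups times the AG size give the data block count, and blocks times block size give the data section size. A minimal sketch, using only the values copied from the log above (plain arithmetic, no device access):

```shell
# Sanity-check the mkfs.xfs geometry reported in the log above.
agcount=4            # "agcount=4"
agsize=130560        # "agsize=130560 blks" (blocks per allocation group)
bsize=4096           # "bsize=4096" (data block size in bytes)
blocks=522240        # "blocks=522240" (total data blocks)

[ $(( agcount * agsize )) -eq "$blocks" ] && echo "AG layout consistent"
echo "data section: $(( blocks * bsize / 1024 / 1024 )) MiB"
```

The 2040 MiB result matches a ~2 GiB partition with the log section (16384 blocks) carved out of the same device.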
00:09:32.512 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:32.512 02:04:40 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@92 -- # mount /dev/sda1 /mnt/device 00:09:33.078 02:04:41 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@93 -- # '[' 1 -eq 1 ']' 00:09:33.078 02:04:41 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@94 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randwrite -ioengine=libaio -bs=4k -size=1024M -name=job0 00:09:33.078 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:33.078 fio-3.35 00:09:33.078 Starting 1 thread 00:09:33.078 job0: Laying out IO file (1 file / 1024MiB) 00:09:51.172 00:09:51.172 job0: (groupid=0, jobs=1): err= 0: pid=65968: Tue Jul 23 02:04:59 2024 00:09:51.172 write: IOPS=15.1k, BW=58.9MiB/s (61.7MB/s)(1024MiB/17392msec); 0 zone resets 00:09:51.172 slat (usec): min=3, max=3316, avg=21.72, stdev=127.64 00:09:51.172 clat (usec): min=1138, max=10467, avg=4222.74, stdev=1050.97 00:09:51.172 lat (usec): min=1150, max=10476, avg=4244.46, stdev=1054.95 00:09:51.172 clat percentiles (usec): 00:09:51.172 | 1.00th=[ 2245], 5.00th=[ 2507], 10.00th=[ 2802], 20.00th=[ 3163], 00:09:51.172 | 30.00th=[ 3720], 40.00th=[ 4015], 50.00th=[ 4228], 60.00th=[ 4490], 00:09:51.172 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5604], 95.00th=[ 6128], 00:09:51.172 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7635], 00:09:51.172 | 99.99th=[ 8356] 00:09:51.172 bw ( KiB/s): min=56232, max=63536, per=99.93%, avg=60248.06, stdev=1523.37, samples=34 00:09:51.172 iops : min=14058, max=15884, avg=15062.00, stdev=380.83, samples=34 00:09:51.172 lat (msec) : 2=0.07%, 4=38.89%, 10=61.04%, 20=0.01% 00:09:51.172 cpu : usr=5.32%, sys=10.57%, ctx=22886, majf=0, minf=1 
00:09:51.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:09:51.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:09:51.172 issued rwts: total=0,262144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:09:51.172 00:09:51.172 Run status group 0 (all jobs): 00:09:51.172 WRITE: bw=58.9MiB/s (61.7MB/s), 58.9MiB/s-58.9MiB/s (61.7MB/s-61.7MB/s), io=1024MiB (1074MB), run=17392-17392msec 00:09:51.172 00:09:51.172 Disk stats (read/write): 00:09:51.172 sda: ios=0/260743, merge=0/1305, ticks=0/986734, in_queue=986734, util=99.55% 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@96 -- # umount /mnt/device 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@98 -- # iscsiadm -m node --logout 00:09:51.172 Logging out of session [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:51.172 Logout of [sid: 3, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
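The fio summaries above can be cross-checked from their own raw counters: reported IOPS and bandwidth follow from the `issued rwts` total, the block size, and the run time. A sketch using the xfs randwrite job's figures copied from the log (pure integer arithmetic, fio not required; truncation explains the missing decimals):

```shell
# Recompute the figures fio reports above from its raw counters.
total_ios=262144     # "issued rwts: total=0,262144,0,0"
runtime_ms=17392     # "run=17392-17392msec"
block_kib=4          # fio was invoked with -bs=4k

iops=$(( total_ios * 1000 / runtime_ms ))                      # reported as 15.1k
bw_mib=$(( total_ios * block_kib / 1024 * 1000 / runtime_ms )) # reported as 58.9MiB/s
echo "IOPS=${iops} BW=${bw_mib}MiB/s"
```

The same arithmetic reproduces the btrfs randread summary (321748 I/Os over 20004 ms gives ~16.1k IOPS, ~62 MiB/s).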
00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@99 -- # waitforiscsidevices 0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:51.172 iscsiadm: No active sessions. 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # true 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@100 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:09:51.172 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:09:51.172 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
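The `waitforiscsidevices` helper seen above polls `iscsiadm -m session -P 3` and counts lines matching `Attached scsi disk sd[a-z]*` until the expected number appears (note the `|| true` semantics when no session exists, visible as "iscsiadm: No active sessions." followed by `n=0`). The counting step can be sketched against a canned dump; the sample text below is illustrative, not captured iscsiadm output:

```shell
# Count attached SCSI disks the way iscsi_tgt/common.sh does, but against a
# canned session dump instead of a live `iscsiadm -m session -P 3` call.
session_dump='Target: iqn.2016-06.io.spdk:Target1 (non-flash)
	Attached SCSI devices:
	Host Number: 6	State: running
	scsi6 Channel 00 Id 0 Lun: 0
		Attached scsi disk sda		State: running'

n=$(printf '%s\n' "$session_dump" | grep -c 'Attached scsi disk sd[a-z]*')
echo "attached disks: $n"
```

With no matching lines `grep -c` prints 0 but exits non-zero, which is why the real helper tolerates the failure before comparing `n` to the expected count.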
00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@101 -- # waitforiscsidevices 1 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@116 -- # local num=1 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:09:51.172 [2024-07-23 02:04:59.439208] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@119 -- # n=1 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- iscsi_tgt/common.sh@123 -- # return 0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # iscsiadm -m session -P 3 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # grep 'Attached scsi disk' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # awk '{print $4}' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@103 -- # dev=sda 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@105 -- # waitforfile /dev/sda1 00:09:51.172 02:04:59 
iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1265 -- # local i=0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda1 ']' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda1 ']' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1276 -- # return 0 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@106 -- # mount -o rw /dev/sda1 /mnt/device 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@107 -- # '[' -f /mnt/device/test ']' 00:09:51.172 File existed. 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@108 -- # echo 'File existed.' 00:09:51.172 02:04:59 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@109 -- # fio -filename=/mnt/device/test -direct=1 -iodepth 64 -thread=1 -invalidate=1 -rw=randread -ioengine=libaio -bs=4k -runtime=20 -time_based=1 -name=job0 00:09:51.172 job0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 00:09:51.172 fio-3.35 00:09:51.172 Starting 1 thread 00:10:13.105 00:10:13.105 job0: (groupid=0, jobs=1): err= 0: pid=66214: Tue Jul 23 02:05:19 2024 00:10:13.105 read: IOPS=13.5k, BW=52.9MiB/s (55.4MB/s)(1057MiB/20005msec) 00:10:13.105 slat (usec): min=3, max=174, avg= 8.42, stdev= 7.89 00:10:13.105 clat (usec): min=1242, max=11298, avg=4719.56, stdev=1192.86 00:10:13.105 lat (usec): min=1353, max=11304, avg=4727.98, stdev=1192.50 00:10:13.105 clat percentiles (usec): 00:10:13.105 | 1.00th=[ 2769], 5.00th=[ 2999], 10.00th=[ 3130], 20.00th=[ 3458], 00:10:13.105 | 30.00th=[ 3949], 40.00th=[ 4359], 50.00th=[ 4686], 60.00th=[ 4948], 
00:10:13.105 | 70.00th=[ 5407], 80.00th=[ 5800], 90.00th=[ 6390], 95.00th=[ 6718], 00:10:13.105 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[ 8225], 99.95th=[ 8717], 00:10:13.105 | 99.99th=[ 9765] 00:10:13.105 bw ( KiB/s): min=52040, max=60040, per=100.00%, avg=54222.87, stdev=1581.85, samples=39 00:10:13.105 iops : min=13010, max=15010, avg=13555.72, stdev=395.46, samples=39 00:10:13.105 lat (msec) : 2=0.01%, 4=32.62%, 10=67.37%, 20=0.01% 00:10:13.105 cpu : usr=5.33%, sys=11.18%, ctx=24885, majf=0, minf=65 00:10:13.105 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:10:13.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.105 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:10:13.106 issued rwts: total=270696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.106 latency : target=0, window=0, percentile=100.00%, depth=64 00:10:13.106 00:10:13.106 Run status group 0 (all jobs): 00:10:13.106 READ: bw=52.9MiB/s (55.4MB/s), 52.9MiB/s-52.9MiB/s (55.4MB/s-55.4MB/s), io=1057MiB (1109MB), run=20005-20005msec 00:10:13.106 00:10:13.106 Disk stats (read/write): 00:10:13.106 sda: ios=267811/0, merge=1365/0, ticks=1226043/0, in_queue=1226042, util=99.60% 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@116 -- # rm -rf /mnt/device/test 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- filesystem/filesystem.sh@117 -- # umount /mnt/device 00:10:13.106 ************************************ 00:10:13.106 END TEST iscsi_tgt_filesystem_xfs 00:10:13.106 ************************************ 00:10:13.106 00:10:13.106 real 0m39.399s 00:10:13.106 user 0m2.255s 00:10:13.106 sys 0m4.328s 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem.iscsi_tgt_filesystem_xfs -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@148 -- # rm -rf /mnt/device 00:10:13.106 Cleaning up iSCSI connection 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@152 -- # iscsicleanup 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:10:13.106 Logging out of session [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:10:13.106 Logout of [sid: 4, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:10:13.106 02:05:19 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@983 -- # rm -rf 00:10:13.106 INFO: Removing lvol bdev 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@153 -- # remove_backends 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@17 -- # echo 'INFO: Removing lvol bdev' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@18 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.106 [2024-07-23 02:05:20.009538] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a97c5fa0-c5f9-4d78-82e4-e50dd4500541) received event(SPDK_BDEV_EVENT_REMOVE) 00:10:13.106 INFO: Removing lvol stores 00:10:13.106 
02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@20 -- # echo 'INFO: Removing lvol stores' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@21 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.106 INFO: Removing NVMe 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@23 -- # echo 'INFO: Removing NVMe' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@24 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@26 -- # return 0 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@154 -- # killprocess 64640 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@948 -- # '[' -z 64640 ']' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@952 -- # kill -0 64640 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # uname 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 64640 
00:10:13.106 killing process with pid 64640 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64640' 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@967 -- # kill 64640 00:10:13.106 02:05:20 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@972 -- # wait 64640 00:10:13.365 02:05:21 iscsi_tgt.iscsi_tgt_filesystem -- filesystem/filesystem.sh@155 -- # iscsitestfini 00:10:13.365 02:05:21 iscsi_tgt.iscsi_tgt_filesystem -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:13.365 ************************************ 00:10:13.365 END TEST iscsi_tgt_filesystem 00:10:13.365 ************************************ 00:10:13.365 00:10:13.365 real 2m6.573s 00:10:13.365 user 8m1.355s 00:10:13.365 sys 0m34.551s 00:10:13.365 02:05:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:13.365 02:05:21 iscsi_tgt.iscsi_tgt_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:13.365 02:05:21 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:13.365 02:05:21 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@32 -- # run_test chap_during_discovery /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:10:13.365 02:05:21 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:13.365 02:05:21 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:13.365 02:05:21 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:13.365 ************************************ 00:10:13.365 START TEST chap_during_discovery 00:10:13.365 ************************************ 00:10:13.365 02:05:21 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1123 -- # 
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_discovery.sh 00:10:13.365 * Looking for test storage... 00:10:13.365 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@13 -- # USER=chapo 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@14 -- # MUSER=mchapo 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@15 -- # PASS=123456789123 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@16 -- # MPASS=321978654321 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@19 -- # iscsitestinit 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@21 -- # set_up_iscsi_target 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@142 -- # pid=66522 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 66522' 00:10:13.365 iSCSI target launched. pid: 66522 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@145 -- # waitforlisten 66522 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@829 -- # '[' -z 66522 ']' 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:13.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:13.365 02:05:22 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:10:13.623 [2024-07-23 02:05:22.147958] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:10:13.623 [2024-07-23 02:05:22.148150] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66522 ] 00:10:13.881 [2024-07-23 02:05:22.435756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.881 [2024-07-23 02:05:22.619423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:14.447 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.012 iscsi_tgt is listening. Running tests... 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 
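`waitforlisten` above blocks until the freshly launched iscsi_tgt accepts RPCs on /var/tmp/spdk.sock, and `waitforfile` earlier in the log does the same for /dev/sda1. The bounded-poll pattern both rely on can be sketched as follows; a temp file stands in for the RPC socket, and `waitforpath` is a hypothetical name for the sketch, not an SPDK helper:

```shell
# Bounded poll for a path to appear, in the spirit of waitforlisten /
# waitforfile; a plain temp file stands in for /var/tmp/spdk.sock.
waitforpath() {
    local path=$1 i=0
    while [ ! -e "$path" ] && [ "$i" -lt 100 ]; do
        i=$(( i + 1 ))
        sleep 0.05
    done
    [ -e "$path" ]   # succeed only if the path showed up in time
}

sock=$(mktemp)       # already exists, so the wait returns immediately
waitforpath "$sock" && echo "listening"
```

The real helper additionally probes the socket with an RPC before declaring the target ready; existence alone is the simplest useful check.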
00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.012 Malloc0 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:15.012 02:05:23 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:15.012 02:05:23 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@155 -- # sleep 1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.399 configuring target for bidirectional authentication 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@24 -- # echo 'configuring target for bidirectional authentication' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 
00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@95 -- # '[' 0 -eq 1 ']' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 02:05:24 
iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:16.399 executing discovery without adding credential to initiator - we expect failure 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@27 -- # rc=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:16.399 iscsiadm: Login failed to authenticate with target 00:10:16.399 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:10:16.399 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@28 -- # rc=24 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@29 -- # '[' 24 -eq 0 ']' 00:10:16.399 configuring initiator for bidirectional authentication 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@35 -- # echo 'configuring initiator for bidirectional authentication' 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@36 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -b 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@16 
-- # BI_DIRECT=0 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 
00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:16.399 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:16.400 iscsiadm: No matching sessions found 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:16.400 iscsiadm: No records found 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # true 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password 
= password/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:16.400 02:05:24 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:10:19.706 02:05:27 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:19.706 02:05:27 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - 
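The `default_initiator_chap_credentials` steps traced above reset the initiator by commenting every CHAP setting back out of `/etc/iscsi/iscsid.conf` with `sed -i`, then restarting iscsid. A minimal, self-contained reproduction of that edit pattern against a scratch copy of the file (the sample contents are illustrative; never edit the live config while sessions are up):

```shell
# Build a scratch config holding the discovery CHAP lines the test enables.
conf=$(mktemp)
cat > "$conf" <<'EOF'
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = chapo
discovery.sendtargets.auth.password = 123456789123
EOF

# Comment each setting back out, exactly as the chap_common.sh trace does
# (GNU sed -i, in-place edit).
sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' "$conf"
sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' "$conf"
sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' "$conf"
```

On the real system this is followed by `systemctl restart iscsid` so the daemon rereads the file, which is why the trace sleeps around the restart.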
ERR; print_backtrace >&2' ERR 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@116 -- # '[' 0 -eq 1 ']' 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@135 -- # restart_iscsid 00:10:20.272 02:05:28 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:10:23.553 02:05:31 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:23.553 02:05:32 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- 
# sleep 1 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:10:24.488 executing discovery with adding credential to initiator 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@37 -- # echo 'executing discovery with adding credential to initiator' 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@38 -- # rc=0 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@39 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:24.488 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@40 -- # '[' 0 -ne 0 ']' 00:10:24.488 DONE 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@44 -- # echo DONE 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@45 -- # default_initiator_chap_credentials 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:24.488 iscsiadm: No matching sessions found 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@64 -- # true 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 
iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:24.488 02:05:33 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@58 -- # sleep 3 00:10:27.770 02:05:36 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:27.770 02:05:36 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@60 -- # sleep 1 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:28.705 02:05:37 
iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@49 -- # killprocess 66522 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@948 -- # '[' -z 66522 ']' 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@952 -- # kill -0 66522 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # uname 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66522 00:10:28.705 killing process with pid 66522 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66522' 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@967 -- # kill 66522 00:10:28.705 02:05:37 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@972 -- # wait 66522 00:10:30.608 02:05:39 iscsi_tgt.chap_during_discovery -- chap/chap_discovery.sh@51 -- # iscsitestfini 00:10:30.608 02:05:39 iscsi_tgt.chap_during_discovery -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:30.608 ************************************ 00:10:30.608 END TEST chap_during_discovery 00:10:30.608 ************************************ 00:10:30.608 00:10:30.608 real 0m17.203s 00:10:30.608 user 0m17.069s 00:10:30.608 sys 0m0.787s 00:10:30.608 02:05:39 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.608 02:05:39 iscsi_tgt.chap_during_discovery -- common/autotest_common.sh@10 
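The `killprocess 66522` sequence above follows a common shutdown pattern: probe the pid with `kill -0`, send SIGTERM, then `wait` so the exit status is reaped before the next test starts. A self-contained sketch of the same pattern against a throwaway background job (not the SPDK helper itself):

```shell
# Start a disposable child process to act as the "target" pid.
sleep 60 &
pid=$!

kill -0 "$pid"              # signal 0: succeeds only if the process exists
kill "$pid"                 # request termination (default SIGTERM)
wait "$pid" 2>/dev/null || true   # reap the child; status 143 is expected

# After reaping, the pid is gone, so kill -0 now fails.
alive=0
if kill -0 "$pid" 2>/dev/null; then
    alive=1
fi
```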
-- # set +x 00:10:30.608 02:05:39 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:30.608 02:05:39 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@33 -- # run_test chap_mutual_auth /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:10:30.608 02:05:39 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:30.608 02:05:39 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.608 02:05:39 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:30.608 ************************************ 00:10:30.608 START TEST chap_mutual_auth 00:10:30.608 ************************************ 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_mutual_not_set.sh 00:10:30.608 * Looking for test storage... 00:10:30.608 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:30.608 02:05:39 
iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/chap/chap_common.sh 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@7 -- # TARGET_NAME=iqn.2016-06.io.spdk:disk1 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@8 -- # TARGET_ALIAS_NAME=disk1_alias 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@9 -- # MALLOC_BDEV_SIZE=64 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@10 -- # MALLOC_BLOCK_SIZE=512 00:10:30.608 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@13 -- # USER=chapo 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@14 -- # MUSER=mchapo 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@15 -- # PASS=123456789123 
00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@16 -- # MPASS=321978654321 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@19 -- # iscsitestinit 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@21 -- # set_up_iscsi_target 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@140 -- # timing_enter start_iscsi_tgt 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@142 -- # pid=66817 00:10:30.609 iSCSI target launched. pid: 66817 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@143 -- # echo 'iSCSI target launched. pid: 66817' 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@141 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@144 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@145 -- # waitforlisten 66817 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@829 -- # '[' -z 66817 ']' 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:30.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:30.609 02:05:39 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:30.868 [2024-07-23 02:05:39.456574] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:30.868 [2024-07-23 02:05:39.456811] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66817 ] 00:10:31.127 [2024-07-23 02:05:39.833047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.386 [2024-07-23 02:05:40.055746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@862 -- # return 0 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@146 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@147 -- # rpc_cmd framework_start_init 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:31.645 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.212 iscsi_tgt is listening. Running tests... 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@148 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@149 -- # timing_exit start_iscsi_tgt 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@151 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@152 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@153 -- # rpc_cmd bdev_malloc_create 64 512 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:32.212 Malloc0 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@154 -- # rpc_cmd iscsi_create_target_node iqn.2016-06.io.spdk:disk1 disk1_alias Malloc0:0 1:2 256 -d 
00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.212 02:05:40 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@155 -- # sleep 1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@156 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:33.588 configuring target for authentication 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@24 -- # echo 'configuring target for authentication' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@25 -- # config_chap_credentials_for_target -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts 
:t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- 
# DURING_LOGIN=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 0 -eq 1 ']' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@99 -- # rpc_cmd iscsi_target_node_set_auth -g 1 -r iqn.2016-06.io.spdk:disk1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@103 -- # '[' 0 -eq 1 ']' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@106 -- # rpc_cmd iscsi_set_discovery_auth -r -g 1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.588 executing discovery without adding credential to initiator - we expect failure 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@26 -- # echo 'executing discovery without adding credential to initiator - we expect failure' 00:10:33.588 configuring initiator with bidirectional authentication 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@28 -- # echo 'configuring initiator with bidirectional authentication' 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@29 -- # config_chap_credentials_for_initiator -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@113 -- # parse_cmd_line -t 1 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:33.588 02:05:41 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- # AUTH_GROUP_ID=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:33.588 02:05:41 
iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@114 -- # default_initiator_chap_credentials 00:10:33.588 02:05:41 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:33.588 iscsiadm: No matching sessions found 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # true 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:33.588 iscsiadm: No records found 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # true 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' 
/etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:33.588 02:05:42 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:10:36.871 02:05:45 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:36.871 02:05:45 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@116 -- # '[' 1 -eq 1 ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@117 -- # sed -i 's/#node.session.auth.authmethod = 
CHAP/node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@118 -- # sed -i 's/#node.session.auth.username =.*/node.session.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@119 -- # sed -i 's/#node.session.auth.password =.*/node.session.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' 1 -eq 1 ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n 321978654321 ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@120 -- # '[' -n mchapo ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@121 -- # sed -i 's/#node.session.auth.username_in =.*/node.session.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@122 -- # sed -i 's/#node.session.auth.password_in =.*/node.session.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@126 -- # '[' 1 -eq 1 ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@127 -- # sed -i 's/#discovery.sendtargets.auth.authmethod = CHAP/discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@128 -- # sed -i 's/#discovery.sendtargets.auth.username =.*/discovery.sendtargets.auth.username = chapo/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@129 -- # sed -i 's/#discovery.sendtargets.auth.password =.*/discovery.sendtargets.auth.password = 123456789123/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' 1 -eq 1 ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@130 -- # '[' -n 321978654321 ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@130 -- # '[' -n mchapo ']' 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@131 -- # sed -i 's/#discovery.sendtargets.auth.username_in =.*/discovery.sendtargets.auth.username_in = mchapo/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@132 -- # sed -i 's/#discovery.sendtargets.auth.password_in =.*/discovery.sendtargets.auth.password_in = 321978654321/' /etc/iscsi/iscsid.conf 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@135 -- # restart_iscsid 00:10:37.438 02:05:46 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:10:40.724 02:05:49 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:40.724 02:05:49 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@136 -- # trap 'trap - ERR; default_initiator_chap_credentials; print_backtrace >&2' ERR 00:10:41.660 executing discovery - target should not be discovered since the -m option was not used 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@30 -- # echo 'executing discovery - target should not be discovered since the -m option was not used' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@31 -- # rc=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:41.660 [2024-07-23 02:05:50.259832] iscsi.c: 982:iscsi_auth_params: *ERROR*: Initiator wants to use mutual CHAP for security, but it's not enabled. 
00:10:41.660 [2024-07-23 02:05:50.259932] iscsi.c:1957:iscsi_op_login_rsp_handle_csg_bit: *ERROR*: iscsi_auth_params() failed 00:10:41.660 iscsiadm: Login failed to authenticate with target 00:10:41.660 iscsiadm: discovery login to 10.0.0.1 rejected: initiator failed authorization 00:10:41.660 iscsiadm: Could not perform SendTargets discovery: iSCSI login failed due to authorization failure 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@32 -- # rc=24 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@33 -- # '[' 24 -eq 0 ']' 00:10:41.660 configuring target for authentication with the -m option 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@37 -- # echo 'configuring target for authentication with the -m option' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@38 -- # config_chap_credentials_for_target -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@84 -- # parse_cmd_line -t 2 -u chapo -s 123456789123 -r mchapo -m 321978654321 -d -l -b 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@13 -- # OPTIND=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@14 -- # DURING_DISCOVERY=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@15 -- # DURING_LOGIN=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@16 -- # BI_DIRECT=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@17 -- # CHAP_USER=chapo 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@18 -- # CHAP_PASS=123456789123 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@19 -- # CHAP_MUSER= 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@20 -- # CHAP_MUSER= 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@21 -- 
# AUTH_GROUP_ID=1 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@26 -- # AUTH_GROUP_ID=2 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@29 -- # CHAP_USER=chapo 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@32 -- # CHAP_PASS=123456789123 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@35 -- # CHAP_MUSER=mchapo 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@38 -- # CHAP_MPASS=321978654321 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@41 -- # DURING_DISCOVERY=1 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- 
chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@44 -- # DURING_LOGIN=1 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@24 -- # case ${opt} in 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@47 -- # BI_DIRECT=1 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@23 -- # getopts :t:u:s:r:m:dlb opt 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@86 -- # rpc_cmd iscsi_create_auth_group 2 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z mchapo ']' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@88 -- # '[' -z 321978654321 ']' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@91 -- # rpc_cmd iscsi_auth_group_add_secret -u chapo -s 123456789123 -m mchapo -r 321978654321 2 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@95 -- # '[' 1 -eq 1 ']' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@96 -- # '[' 1 -eq 1 ']' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@97 -- # rpc_cmd iscsi_target_node_set_auth -g 2 -r -m iqn.2016-06.io.spdk:disk1 00:10:41.660 02:05:50 
iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@102 -- # '[' 1 -eq 1 ']' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@103 -- # '[' 1 -eq 1 ']' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@104 -- # rpc_cmd iscsi_set_discovery_auth -r -m -g 2 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.660 executing discovery: 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@39 -- # echo 'executing discovery:' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@40 -- # rc=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@41 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:41.660 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@42 -- # '[' 0 -ne 0 ']' 00:10:41.660 executing login: 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@46 -- # echo 'executing login:' 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@47 -- # rc=0 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@48 -- # iscsiadm -m node -l -p 10.0.0.1:3260 00:10:41.660 Logging in to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:10:41.660 Login to [iface: default, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 
successful. 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@49 -- # '[' 0 -ne 0 ']' 00:10:41.660 DONE 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@54 -- # echo DONE 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@55 -- # default_initiator_chap_credentials 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@64 -- # iscsiadm -m node --logout 00:10:41.660 [2024-07-23 02:05:50.372908] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:41.660 Logging out of session [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] 00:10:41.660 Logout of [sid: 5, target: iqn.2016-06.io.spdk:disk1, portal: 10.0.0.1,3260] successful. 00:10:41.660 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@65 -- # iscsiadm -m node -o delete 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@67 -- # sed -i 's/^node.session.auth.authmethod = CHAP/#node.session.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@68 -- # sed -i 's/^node.session.auth.username =.*/#node.session.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@69 -- # sed -i 's/^node.session.auth.password =.*/#node.session.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@70 -- # sed -i 's/^node.session.auth.username_in =.*/#node.session.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@71 -- # sed -i 's/^node.session.auth.password_in =.*/#node.session.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@73 -- # sed -i 's/^discovery.sendtargets.auth.authmethod = 
CHAP/#discovery.sendtargets.auth.authmethod = CHAP/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@74 -- # sed -i 's/^discovery.sendtargets.auth.username =.*/#discovery.sendtargets.auth.username = username/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@75 -- # sed -i 's/^discovery.sendtargets.auth.password =.*/#discovery.sendtargets.auth.password = password/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@76 -- # sed -i 's/^discovery.sendtargets.auth.username_in =.*/#discovery.sendtargets.auth.username_in = username_in/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@77 -- # sed -i 's/^discovery.sendtargets.auth.password_in =.*/#discovery.sendtargets.auth.password_in = password_in/' /etc/iscsi/iscsid.conf 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@78 -- # restart_iscsid 00:10:41.919 02:05:50 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@58 -- # sleep 3 00:10:45.202 02:05:53 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@59 -- # systemctl restart iscsid 00:10:45.202 02:05:53 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@60 -- # sleep 1 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- chap/chap_common.sh@79 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@57 -- # trap - SIGINT SIGTERM EXIT 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@59 -- # killprocess 66817 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@948 -- # '[' -z 66817 ']' 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@952 -- # kill -0 66817 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@953 -- # uname 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 66817 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:46.139 killing process with pid 66817 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66817' 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@967 -- # kill 66817 00:10:46.139 02:05:54 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@972 -- # wait 66817 00:10:48.055 02:05:56 iscsi_tgt.chap_mutual_auth -- chap/chap_mutual_not_set.sh@61 -- # iscsitestfini 00:10:48.055 02:05:56 iscsi_tgt.chap_mutual_auth -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:48.055 00:10:48.056 real 0m17.357s 00:10:48.056 user 0m17.103s 00:10:48.056 sys 0m0.925s 00:10:48.056 02:05:56 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:48.056 02:05:56 iscsi_tgt.chap_mutual_auth -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 ************************************ 00:10:48.056 END TEST chap_mutual_auth 00:10:48.056 ************************************ 00:10:48.056 02:05:56 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:48.056 02:05:56 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@34 -- # run_test iscsi_tgt_reset /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:10:48.056 02:05:56 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:48.056 02:05:56 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:48.056 02:05:56 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 ************************************ 00:10:48.056 START TEST iscsi_tgt_reset 00:10:48.056 
************************************ 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset/reset.sh 00:10:48.056 * Looking for test storage... 00:10:48.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/reset 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@25 -- # 
INITIATOR_TAG=2 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@11 -- # iscsitestinit 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@18 -- # hash sg_reset 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@22 -- # timing_enter start_iscsi_tgt 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@25 -- # pid=67139 00:10:48.056 Process pid: 67139 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@26 -- # echo 'Process pid: 67139' 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@28 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@30 -- # waitforlisten 67139 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- 
common/autotest_common.sh@829 -- # '[' -z 67139 ']' 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:48.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:48.056 02:05:56 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:48.315 [2024-07-23 02:05:56.857394] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:48.315 [2024-07-23 02:05:56.857633] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67139 ] 00:10:48.315 [2024-07-23 02:05:57.036722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.574 [2024-07-23 02:05:57.267167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@862 -- # return 0 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@31 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@32 -- # rpc_cmd framework_start_init 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.141 02:05:57 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.709 iscsi_tgt is listening. Running tests... 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@33 -- # echo 'iscsi_tgt is listening. Running tests...' 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@35 -- # timing_exit start_iscsi_tgt 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@37 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@38 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@39 -- # rpc_cmd bdev_malloc_create 64 512 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.709 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # 
set +x 00:10:49.968 Malloc0 00:10:49.968 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.968 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@44 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d 00:10:49.968 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.968 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:49.968 02:05:58 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.968 02:05:58 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@45 -- # sleep 1 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@47 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:10:50.905 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@48 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:10:50.905 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:10:50.905 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@49 -- # waitforiscsidevices 1 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@116 -- # local num=1 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:10:50.905 [2024-07-23 02:05:59.575905] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@119 -- # n=1 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@123 -- # return 0 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # iscsiadm -m session -P 3 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # grep 'Attached scsi disk' 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # awk '{print $4}' 00:10:50.905 FIO pid: 67208 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@51 -- # dev=sda 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@54 -- # fiopid=67208 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@55 -- # echo 'FIO pid: 67208' 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@57 -- # trap 'iscsicleanup; killprocess $pid; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 60 00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 
00:10:50.905 02:05:59 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:10:50.905 [global] 00:10:50.905 thread=1 00:10:50.905 invalidate=1 00:10:50.905 rw=read 00:10:50.905 time_based=1 00:10:50.905 runtime=60 00:10:50.905 ioengine=libaio 00:10:50.905 direct=1 00:10:50.905 bs=512 00:10:50.905 iodepth=1 00:10:50.905 norandommap=1 00:10:50.905 numjobs=1 00:10:50.905 00:10:50.905 [job0] 00:10:50.905 filename=/dev/sda 00:10:50.905 queue_depth set to 113 (sda) 00:10:51.164 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:10:51.164 fio-3.35 00:10:51.164 Starting 1 thread 00:10:52.103 02:06:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67139 00:10:52.103 02:06:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67208 00:10:52.103 02:06:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:10:52.103 [2024-07-23 02:06:00.599015] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:10:52.103 [2024-07-23 02:06:00.599156] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:10:52.103 02:06:00 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:10:52.103 [2024-07-23 02:06:00.605095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:53.040 02:06:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67139 00:10:53.040 02:06:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67208 00:10:53.040 02:06:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:10:53.040 02:06:01 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:10:53.974 02:06:02 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67139 00:10:53.974 02:06:02 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67208 00:10:53.975 02:06:02 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:10:53.975 [2024-07-23 
02:06:02.610426] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:10:53.975 [2024-07-23 02:06:02.610546] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:10:53.975 02:06:02 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:10:53.975 [2024-07-23 02:06:02.612138] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:54.907 02:06:03 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67139 00:10:54.907 02:06:03 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67208 00:10:54.907 02:06:03 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@60 -- # for i in 1 2 3 00:10:54.907 02:06:03 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@61 -- # sleep 1 00:10:56.280 02:06:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@62 -- # kill -s 0 67139 00:10:56.280 02:06:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@63 -- # kill -s 0 67208 00:10:56.280 02:06:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@64 -- # sg_reset -d /dev/sda 00:10:56.280 [2024-07-23 02:06:04.621830] iscsi.c:3690:iscsi_pdu_hdr_op_task: *NOTICE*: LOGICAL_UNIT_RESET 00:10:56.280 [2024-07-23 02:06:04.621925] lun.c: 157:_scsi_lun_execute_mgmt_task: *NOTICE*: Bdev scsi reset on lun reset 00:10:56.280 [2024-07-23 02:06:04.623267] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:10:56.280 02:06:04 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@65 -- # sleep 1 00:10:56.856 Cleaning up iSCSI connection 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@66 -- # kill -s 0 67139 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@67 -- # kill -s 0 67208 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@70 -- # kill 67208 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # wait 67208 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@71 -- # true 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- 
reset/reset.sh@73 -- # trap - SIGINT SIGTERM EXIT 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@75 -- # iscsicleanup 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:10:56.856 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:10:57.113 fio: pid=67234, err=19/file:io_u.c:1889, func=io_u error, error=No such device 00:10:57.113 fio: io_u error on file /dev/sda: No such device: read offset=32216064, buflen=512 00:10:57.113 00:10:57.113 job0: (groupid=0, jobs=1): err=19 (file:io_u.c:1889, func=io_u error, error=No such device): pid=67234: Tue Jul 23 02:06:05 2024 00:10:57.113 read: IOPS=10.9k, BW=5453KiB/s (5584kB/s)(30.7MiB/5769msec) 00:10:57.113 slat (usec): min=3, max=1292, avg= 6.12, stdev= 5.77 00:10:57.113 clat (usec): min=66, max=5939, avg=84.95, stdev=33.46 00:10:57.113 lat (usec): min=72, max=5944, avg=91.04, stdev=33.70 00:10:57.113 clat percentiles (usec): 00:10:57.113 | 1.00th=[ 76], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 77], 00:10:57.113 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:10:57.113 | 70.00th=[ 85], 80.00th=[ 92], 90.00th=[ 103], 95.00th=[ 114], 00:10:57.113 | 99.00th=[ 139], 99.50th=[ 153], 99.90th=[ 241], 99.95th=[ 297], 00:10:57.113 | 99.99th=[ 1156] 00:10:57.113 bw ( KiB/s): min= 4693, max= 5668, per=100.00%, avg=5459.36, stdev=284.47, samples=11 00:10:57.113 iops : min= 9387, max=11336, avg=10918.91, stdev=568.72, samples=11 00:10:57.113 lat (usec) : 100=88.18%, 250=11.73%, 500=0.07%, 750=0.01%, 1000=0.01% 00:10:57.113 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:10:57.113 cpu : usr=2.51%, sys=8.62%, ctx=62923, majf=0, minf=1 00:10:57.113 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.113 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.114 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:57.114 issued rwts: total=62923,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.114 00:10:57.114 Run status group 0 (all jobs): 00:10:57.114 READ: bw=5453KiB/s (5584kB/s), 5453KiB/s-5453KiB/s (5584kB/s-5584kB/s), io=30.7MiB (32.2MB), run=5769-5769msec 00:10:57.114 00:10:57.114 Disk stats (read/write): 00:10:57.114 sda: ios=61643/0, merge=0/0, ticks=5224/0, in_queue=5224, util=98.36% 00:10:57.114 Logging out of session [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:10:57.114 Logout of [sid: 6, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@983 -- # rm -rf 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@76 -- # killprocess 67139 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@948 -- # '[' -z 67139 ']' 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@952 -- # kill -0 67139 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # uname 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67139 00:10:57.114 killing process with pid 67139 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67139' 00:10:57.114 02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@967 -- # kill 67139 00:10:57.114 
02:06:05 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@972 -- # wait 67139 00:10:59.644 02:06:07 iscsi_tgt.iscsi_tgt_reset -- reset/reset.sh@77 -- # iscsitestfini 00:10:59.644 02:06:07 iscsi_tgt.iscsi_tgt_reset -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:10:59.644 00:10:59.644 real 0m11.282s 00:10:59.644 user 0m8.538s 00:10:59.644 sys 0m2.337s 00:10:59.645 02:06:07 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:59.645 02:06:07 iscsi_tgt.iscsi_tgt_reset -- common/autotest_common.sh@10 -- # set +x 00:10:59.645 ************************************ 00:10:59.645 END TEST iscsi_tgt_reset 00:10:59.645 ************************************ 00:10:59.645 02:06:07 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:10:59.645 02:06:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@35 -- # run_test iscsi_tgt_rpc_config /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:10:59.645 02:06:07 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:10:59.645 02:06:07 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.645 02:06:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:10:59.645 ************************************ 00:10:59.645 START TEST iscsi_tgt_rpc_config 00:10:59.645 ************************************ 00:10:59.645 02:06:07 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.sh 00:10:59.645 * Looking for test storage... 
00:10:59.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:10:59.645 
02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@11 -- # iscsitestinit 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@16 -- # rpc_config_py=/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@18 -- # timing_enter start_iscsi_tgt 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@21 -- # pid=67404 00:10:59.645 Process pid: 67404 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@22 -- # echo 'Process pid: 67404' 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@24 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@26 -- # waitforlisten 67404 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@829 -- # '[' -z 67404 ']' 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@20 -- # ip netns exec spdk_iscsi_ns 
/home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:59.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:59.645 02:06:08 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:10:59.645 [2024-07-23 02:06:08.189323] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:10:59.645 [2024-07-23 02:06:08.189554] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67404 ] 00:10:59.645 [2024-07-23 02:06:08.364404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.904 [2024-07-23 02:06:08.577906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.472 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:00.472 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@862 -- # return 0 00:11:00.472 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@28 -- # rpc_wait_pid=67420 00:11:00.472 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:11:00.472 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@29 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:11:00.731 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@32 -- # ps 67420 00:11:00.731 PID TTY STAT TIME COMMAND 00:11:00.731 67420 ? S 0:00 python3 /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:11:00.731 02:06:09 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:01.668 02:06:10 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@35 -- # sleep 1 00:11:02.602 iscsi_tgt is listening. Running tests... 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@36 -- # echo 'iscsi_tgt is listening. Running tests...' 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@39 -- # NOT ps 67420 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 67420 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:11:02.602 02:06:11 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 67420 00:11:02.602 PID TTY STAT TIME COMMAND 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@43 -- # rpc_wait_pid=67450 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_wait_init 00:11:02.602 02:06:11 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@44 -- # sleep 1 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@45 -- # NOT ps 67450 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@648 -- # local es=0 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@650 -- # valid_exec_arg ps 67450 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@636 -- # local arg=ps 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # type -t ps 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # type -P ps 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # arg=/usr/bin/ps 00:11:03.539 02:06:12 
iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/ps ]] 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # ps 67450 00:11:03.539 PID TTY STAT TIME COMMAND 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@651 -- # es=1 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@47 -- # timing_exit start_iscsi_tgt 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:03.539 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:11:03.798 02:06:12 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@49 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rpc_config/rpc_config.py /home/vagrant/spdk_repo/spdk/scripts/rpc.py 10.0.0.1 10.0.0.2 3260 10.0.0.2/32 spdk_iscsi_ns 00:11:25.729 [2024-07-23 02:06:33.910892] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:28.304 [2024-07-23 02:06:36.718340] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:11:29.682 verify_log_flag_rpc_methods passed 00:11:29.682 create_malloc_bdevs_rpc_methods passed 00:11:29.682 verify_portal_groups_rpc_methods passed 00:11:29.682 verify_initiator_groups_rpc_method passed. 00:11:29.682 This issue will be fixed later. 00:11:29.682 verify_target_nodes_rpc_methods passed. 
00:11:29.682 verify_scsi_devices_rpc_methods passed 00:11:29.682 verify_iscsi_connection_rpc_methods passed 00:11:29.682 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:11:29.943 [ 00:11:29.943 { 00:11:29.943 "name": "Malloc0", 00:11:29.943 "aliases": [ 00:11:29.943 "2d36a065-c819-4027-9b96-1c478367ea2a" 00:11:29.943 ], 00:11:29.943 "product_name": "Malloc disk", 00:11:29.943 "block_size": 512, 00:11:29.943 "num_blocks": 131072, 00:11:29.943 "uuid": "2d36a065-c819-4027-9b96-1c478367ea2a", 00:11:29.943 "assigned_rate_limits": { 00:11:29.943 "rw_ios_per_sec": 0, 00:11:29.943 "rw_mbytes_per_sec": 0, 00:11:29.943 "r_mbytes_per_sec": 0, 00:11:29.943 "w_mbytes_per_sec": 0 00:11:29.943 }, 00:11:29.943 "claimed": false, 00:11:29.943 "zoned": false, 00:11:29.943 "supported_io_types": { 00:11:29.943 "read": true, 00:11:29.943 "write": true, 00:11:29.943 "unmap": true, 00:11:29.943 "flush": true, 00:11:29.943 "reset": true, 00:11:29.943 "nvme_admin": false, 00:11:29.943 "nvme_io": false, 00:11:29.943 "nvme_io_md": false, 00:11:29.943 "write_zeroes": true, 00:11:29.943 "zcopy": true, 00:11:29.943 "get_zone_info": false, 00:11:29.943 "zone_management": false, 00:11:29.943 "zone_append": false, 00:11:29.943 "compare": false, 00:11:29.943 "compare_and_write": false, 00:11:29.943 "abort": true, 00:11:29.943 "seek_hole": false, 00:11:29.943 "seek_data": false, 00:11:29.943 "copy": true, 00:11:29.943 "nvme_iov_md": false 00:11:29.943 }, 00:11:29.943 "memory_domains": [ 00:11:29.943 { 00:11:29.943 "dma_device_id": "system", 00:11:29.943 "dma_device_type": 1 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.943 "dma_device_type": 2 00:11:29.943 } 00:11:29.943 ], 00:11:29.943 "driver_specific": {} 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "name": "Malloc1", 00:11:29.943 "aliases": [ 00:11:29.943 "83369fb8-1073-44e7-8e4d-0c57891370fa" 00:11:29.943 ], 
00:11:29.943 "product_name": "Malloc disk", 00:11:29.943 "block_size": 512, 00:11:29.943 "num_blocks": 131072, 00:11:29.943 "uuid": "83369fb8-1073-44e7-8e4d-0c57891370fa", 00:11:29.943 "assigned_rate_limits": { 00:11:29.943 "rw_ios_per_sec": 0, 00:11:29.943 "rw_mbytes_per_sec": 0, 00:11:29.943 "r_mbytes_per_sec": 0, 00:11:29.943 "w_mbytes_per_sec": 0 00:11:29.943 }, 00:11:29.943 "claimed": false, 00:11:29.943 "zoned": false, 00:11:29.943 "supported_io_types": { 00:11:29.943 "read": true, 00:11:29.943 "write": true, 00:11:29.943 "unmap": true, 00:11:29.943 "flush": true, 00:11:29.943 "reset": true, 00:11:29.943 "nvme_admin": false, 00:11:29.943 "nvme_io": false, 00:11:29.943 "nvme_io_md": false, 00:11:29.943 "write_zeroes": true, 00:11:29.943 "zcopy": true, 00:11:29.943 "get_zone_info": false, 00:11:29.943 "zone_management": false, 00:11:29.943 "zone_append": false, 00:11:29.943 "compare": false, 00:11:29.943 "compare_and_write": false, 00:11:29.943 "abort": true, 00:11:29.943 "seek_hole": false, 00:11:29.943 "seek_data": false, 00:11:29.943 "copy": true, 00:11:29.943 "nvme_iov_md": false 00:11:29.943 }, 00:11:29.943 "memory_domains": [ 00:11:29.943 { 00:11:29.943 "dma_device_id": "system", 00:11:29.943 "dma_device_type": 1 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.943 "dma_device_type": 2 00:11:29.943 } 00:11:29.943 ], 00:11:29.943 "driver_specific": {} 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "name": "Malloc2", 00:11:29.943 "aliases": [ 00:11:29.943 "18a1cc1f-c6dd-4ee2-92da-97e9d2e9e240" 00:11:29.943 ], 00:11:29.943 "product_name": "Malloc disk", 00:11:29.943 "block_size": 512, 00:11:29.943 "num_blocks": 131072, 00:11:29.943 "uuid": "18a1cc1f-c6dd-4ee2-92da-97e9d2e9e240", 00:11:29.943 "assigned_rate_limits": { 00:11:29.943 "rw_ios_per_sec": 0, 00:11:29.943 "rw_mbytes_per_sec": 0, 00:11:29.943 "r_mbytes_per_sec": 0, 00:11:29.943 "w_mbytes_per_sec": 0 00:11:29.943 }, 00:11:29.943 "claimed": false, 00:11:29.943 
"zoned": false, 00:11:29.943 "supported_io_types": { 00:11:29.943 "read": true, 00:11:29.943 "write": true, 00:11:29.943 "unmap": true, 00:11:29.943 "flush": true, 00:11:29.943 "reset": true, 00:11:29.943 "nvme_admin": false, 00:11:29.943 "nvme_io": false, 00:11:29.943 "nvme_io_md": false, 00:11:29.943 "write_zeroes": true, 00:11:29.943 "zcopy": true, 00:11:29.943 "get_zone_info": false, 00:11:29.943 "zone_management": false, 00:11:29.943 "zone_append": false, 00:11:29.943 "compare": false, 00:11:29.943 "compare_and_write": false, 00:11:29.943 "abort": true, 00:11:29.943 "seek_hole": false, 00:11:29.943 "seek_data": false, 00:11:29.943 "copy": true, 00:11:29.943 "nvme_iov_md": false 00:11:29.943 }, 00:11:29.943 "memory_domains": [ 00:11:29.943 { 00:11:29.943 "dma_device_id": "system", 00:11:29.943 "dma_device_type": 1 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.943 "dma_device_type": 2 00:11:29.943 } 00:11:29.943 ], 00:11:29.943 "driver_specific": {} 00:11:29.943 }, 00:11:29.943 { 00:11:29.943 "name": "Malloc3", 00:11:29.943 "aliases": [ 00:11:29.943 "dcd84c95-8d9c-45cd-84ce-b18018bd78b1" 00:11:29.943 ], 00:11:29.943 "product_name": "Malloc disk", 00:11:29.943 "block_size": 512, 00:11:29.943 "num_blocks": 131072, 00:11:29.943 "uuid": "dcd84c95-8d9c-45cd-84ce-b18018bd78b1", 00:11:29.943 "assigned_rate_limits": { 00:11:29.943 "rw_ios_per_sec": 0, 00:11:29.943 "rw_mbytes_per_sec": 0, 00:11:29.943 "r_mbytes_per_sec": 0, 00:11:29.943 "w_mbytes_per_sec": 0 00:11:29.943 }, 00:11:29.943 "claimed": false, 00:11:29.944 "zoned": false, 00:11:29.944 "supported_io_types": { 00:11:29.944 "read": true, 00:11:29.944 "write": true, 00:11:29.944 "unmap": true, 00:11:29.944 "flush": true, 00:11:29.944 "reset": true, 00:11:29.944 "nvme_admin": false, 00:11:29.944 "nvme_io": false, 00:11:29.944 "nvme_io_md": false, 00:11:29.944 "write_zeroes": true, 00:11:29.944 "zcopy": true, 00:11:29.944 "get_zone_info": false, 00:11:29.944 
"zone_management": false, 00:11:29.944 "zone_append": false, 00:11:29.944 "compare": false, 00:11:29.944 "compare_and_write": false, 00:11:29.944 "abort": true, 00:11:29.944 "seek_hole": false, 00:11:29.944 "seek_data": false, 00:11:29.944 "copy": true, 00:11:29.944 "nvme_iov_md": false 00:11:29.944 }, 00:11:29.944 "memory_domains": [ 00:11:29.944 { 00:11:29.944 "dma_device_id": "system", 00:11:29.944 "dma_device_type": 1 00:11:29.944 }, 00:11:29.944 { 00:11:29.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.944 "dma_device_type": 2 00:11:29.944 } 00:11:29.944 ], 00:11:29.944 "driver_specific": {} 00:11:29.944 }, 00:11:29.944 { 00:11:29.944 "name": "Malloc4", 00:11:29.944 "aliases": [ 00:11:29.944 "8d4f6fca-700f-45ba-85e2-0591b196210c" 00:11:29.944 ], 00:11:29.944 "product_name": "Malloc disk", 00:11:29.944 "block_size": 512, 00:11:29.944 "num_blocks": 131072, 00:11:29.944 "uuid": "8d4f6fca-700f-45ba-85e2-0591b196210c", 00:11:29.944 "assigned_rate_limits": { 00:11:29.944 "rw_ios_per_sec": 0, 00:11:29.944 "rw_mbytes_per_sec": 0, 00:11:29.944 "r_mbytes_per_sec": 0, 00:11:29.944 "w_mbytes_per_sec": 0 00:11:29.944 }, 00:11:29.944 "claimed": false, 00:11:29.944 "zoned": false, 00:11:29.944 "supported_io_types": { 00:11:29.944 "read": true, 00:11:29.944 "write": true, 00:11:29.944 "unmap": true, 00:11:29.944 "flush": true, 00:11:29.944 "reset": true, 00:11:29.944 "nvme_admin": false, 00:11:29.944 "nvme_io": false, 00:11:29.944 "nvme_io_md": false, 00:11:29.944 "write_zeroes": true, 00:11:29.944 "zcopy": true, 00:11:29.944 "get_zone_info": false, 00:11:29.944 "zone_management": false, 00:11:29.944 "zone_append": false, 00:11:29.944 "compare": false, 00:11:29.944 "compare_and_write": false, 00:11:29.944 "abort": true, 00:11:29.944 "seek_hole": false, 00:11:29.944 "seek_data": false, 00:11:29.944 "copy": true, 00:11:29.944 "nvme_iov_md": false 00:11:29.944 }, 00:11:29.944 "memory_domains": [ 00:11:29.944 { 00:11:29.944 "dma_device_id": "system", 00:11:29.944 
"dma_device_type": 1 00:11:29.944 }, 00:11:29.944 { 00:11:29.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.944 "dma_device_type": 2 00:11:29.944 } 00:11:29.944 ], 00:11:29.944 "driver_specific": {} 00:11:29.944 }, 00:11:29.944 { 00:11:29.944 "name": "Malloc5", 00:11:29.944 "aliases": [ 00:11:29.944 "5fc060ba-36d1-49a0-82d7-51bd767c7968" 00:11:29.944 ], 00:11:29.944 "product_name": "Malloc disk", 00:11:29.944 "block_size": 512, 00:11:29.944 "num_blocks": 131072, 00:11:29.944 "uuid": "5fc060ba-36d1-49a0-82d7-51bd767c7968", 00:11:29.944 "assigned_rate_limits": { 00:11:29.944 "rw_ios_per_sec": 0, 00:11:29.944 "rw_mbytes_per_sec": 0, 00:11:29.944 "r_mbytes_per_sec": 0, 00:11:29.944 "w_mbytes_per_sec": 0 00:11:29.944 }, 00:11:29.944 "claimed": false, 00:11:29.944 "zoned": false, 00:11:29.944 "supported_io_types": { 00:11:29.944 "read": true, 00:11:29.944 "write": true, 00:11:29.944 "unmap": true, 00:11:29.944 "flush": true, 00:11:29.944 "reset": true, 00:11:29.944 "nvme_admin": false, 00:11:29.944 "nvme_io": false, 00:11:29.944 "nvme_io_md": false, 00:11:29.944 "write_zeroes": true, 00:11:29.944 "zcopy": true, 00:11:29.944 "get_zone_info": false, 00:11:29.944 "zone_management": false, 00:11:29.944 "zone_append": false, 00:11:29.944 "compare": false, 00:11:29.944 "compare_and_write": false, 00:11:29.944 "abort": true, 00:11:29.944 "seek_hole": false, 00:11:29.944 "seek_data": false, 00:11:29.944 "copy": true, 00:11:29.944 "nvme_iov_md": false 00:11:29.944 }, 00:11:29.944 "memory_domains": [ 00:11:29.944 { 00:11:29.944 "dma_device_id": "system", 00:11:29.944 "dma_device_type": 1 00:11:29.944 }, 00:11:29.944 { 00:11:29.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.944 "dma_device_type": 2 00:11:29.944 } 00:11:29.944 ], 00:11:29.944 "driver_specific": {} 00:11:29.944 } 00:11:29.944 ] 00:11:29.944 Cleaning up iSCSI connection 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:11:29.944 
02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@55 -- # iscsicleanup 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:11:29.944 iscsiadm: No matching sessions found 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@981 -- # true 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:11:29.944 iscsiadm: No records found 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@982 -- # true 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@983 -- # rm -rf 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@56 -- # killprocess 67404 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@948 -- # '[' -z 67404 ']' 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@952 -- # kill -0 67404 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # uname 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 67404 00:11:29.944 killing process with pid 67404 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67404' 00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@967 -- # kill 67404 
00:11:29.944 02:06:38 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@972 -- # wait 67404 00:11:32.480 02:06:41 iscsi_tgt.iscsi_tgt_rpc_config -- rpc_config/rpc_config.sh@58 -- # iscsitestfini 00:11:32.480 02:06:41 iscsi_tgt.iscsi_tgt_rpc_config -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:11:32.480 ************************************ 00:11:32.480 END TEST iscsi_tgt_rpc_config 00:11:32.480 ************************************ 00:11:32.480 00:11:32.480 real 0m33.278s 00:11:32.480 user 0m54.891s 00:11:32.480 sys 0m4.385s 00:11:32.480 02:06:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.480 02:06:41 iscsi_tgt.iscsi_tgt_rpc_config -- common/autotest_common.sh@10 -- # set +x 00:11:32.740 02:06:41 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:11:32.740 02:06:41 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@36 -- # run_test iscsi_tgt_iscsi_lvol /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:11:32.740 02:06:41 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:11:32.740 02:06:41 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.740 02:06:41 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:11:32.740 ************************************ 00:11:32.740 START TEST iscsi_tgt_iscsi_lvol 00:11:32.740 ************************************ 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol/iscsi_lvol.sh 00:11:32.740 * Looking for test storage... 
00:11:32.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/lvol 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:11:32.740 02:06:41 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@11 -- # iscsitestinit 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@13 -- # MALLOC_BDEV_SIZE=128 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@15 -- # '[' 1 -eq 1 ']' 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@16 -- # NUM_LVS=10 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@17 -- # NUM_LVOL=10 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@23 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@24 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@26 -- # timing_enter start_iscsi_tgt 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:32.740 Process pid: 68002 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@29 -- # pid=68002 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@30 -- # echo 'Process pid: 68002' 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@32 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM 
EXIT 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@28 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@34 -- # waitforlisten 68002 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@829 -- # '[' -z 68002 ']' 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.740 02:06:41 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:32.999 [2024-07-23 02:06:41.527903] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:11:32.999 [2024-07-23 02:06:41.528124] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68002 ] 00:11:32.999 [2024-07-23 02:06:41.703521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.257 [2024-07-23 02:06:41.903921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.257 [2024-07-23 02:06:41.904096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.257 [2024-07-23 02:06:41.904224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.257 [2024-07-23 02:06:41.904397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.825 02:06:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.825 02:06:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@862 -- # return 0 00:11:33.825 02:06:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 16 00:11:34.084 02:06:42 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:35.030 iscsi_tgt is listening. Running tests... 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@37 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@39 -- # timing_exit start_iscsi_tgt 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@41 -- # timing_enter setup 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # seq 1 10 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=3 00:11:35.030 02:06:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 3 ANY 10.0.0.2/32 00:11:35.288 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 1 -eq 1 ']' 00:11:35.288 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:35.855 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@50 -- # malloc_bdevs='Malloc0 ' 00:11:35.855 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:36.113 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@51 -- # malloc_bdevs+=Malloc1 00:11:36.113 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@52 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:36.113 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@53 -- # bdev=raid0 00:11:36.113 02:06:44 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs_1 -c 1048576 00:11:36.372 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=a1995239-ff31-4658-b25a-b161aa0d1e26 00:11:36.373 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:36.373 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:36.373 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:36.373 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_1 10 00:11:36.631 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ddc31a96-98dd-4f92-9e4c-706d0bdc74fd 00:11:36.631 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ddc31a96-98dd-4f92-9e4c-706d0bdc74fd:0 ' 00:11:36.631 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:36.631 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_2 10 00:11:36.889 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c7df1094-a3a2-4efb-82b6-a85bf9a033e7 00:11:36.889 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c7df1094-a3a2-4efb-82b6-a85bf9a033e7:1 ' 00:11:36.889 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:36.889 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_3 10 00:11:37.148 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4ec1f050-9f49-46e9-af4a-6685048cf38e 00:11:37.148 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4ec1f050-9f49-46e9-af4a-6685048cf38e:2 ' 00:11:37.148 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:37.148 02:06:45 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_4 10 00:11:37.407 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=14e7d0a1-571d-40ec-af8e-61acd0ea3764 00:11:37.407 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='14e7d0a1-571d-40ec-af8e-61acd0ea3764:3 ' 00:11:37.407 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:37.407 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_5 10 00:11:37.667 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=da890025-2c4e-407a-8339-fa5232292b30 00:11:37.667 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='da890025-2c4e-407a-8339-fa5232292b30:4 ' 00:11:37.667 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:37.667 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_6 10 00:11:37.926 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=166824f9-1db5-430f-bd2f-20942344e6ea 00:11:37.926 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='166824f9-1db5-430f-bd2f-20942344e6ea:5 ' 00:11:37.927 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:37.927 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_7 10 00:11:38.185 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e680d41f-f6ab-4fd0-a595-624dd0808e62 00:11:38.185 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e680d41f-f6ab-4fd0-a595-624dd0808e62:6 ' 00:11:38.185 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:38.185 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_8 10 00:11:38.444 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8e9e8720-18d5-4991-808e-0dfdb5ac0eb0 00:11:38.444 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8e9e8720-18d5-4991-808e-0dfdb5ac0eb0:7 ' 00:11:38.444 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:38.444 02:06:46 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_9 10 00:11:38.444 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=29780bec-ea3a-467a-a35a-22a18b40a8aa 00:11:38.444 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='29780bec-ea3a-467a-a35a-22a18b40a8aa:8 ' 00:11:38.444 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:38.444 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a1995239-ff31-4658-b25a-b161aa0d1e26 lbd_10 10 00:11:38.703 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d094bde5-f501-4819-8267-a26bd0f371d6 00:11:38.703 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d094bde5-f501-4819-8267-a26bd0f371d6:9 ' 00:11:38.703 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias 'ddc31a96-98dd-4f92-9e4c-706d0bdc74fd:0 c7df1094-a3a2-4efb-82b6-a85bf9a033e7:1 4ec1f050-9f49-46e9-af4a-6685048cf38e:2 14e7d0a1-571d-40ec-af8e-61acd0ea3764:3 da890025-2c4e-407a-8339-fa5232292b30:4 166824f9-1db5-430f-bd2f-20942344e6ea:5 e680d41f-f6ab-4fd0-a595-624dd0808e62:6 8e9e8720-18d5-4991-808e-0dfdb5ac0eb0:7 29780bec-ea3a-467a-a35a-22a18b40a8aa:8 d094bde5-f501-4819-8267-a26bd0f371d6:9 ' 1:3 256 -d 00:11:38.961 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:38.961 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=4 00:11:38.961 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 4 ANY 10.0.0.2/32 00:11:39.219 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 2 -eq 1 ']' 00:11:39.219 02:06:47 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:39.478 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc2 00:11:39.478 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc2 lvs_2 -c 1048576 00:11:39.737 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=28bbc9db-d028-4a03-8187-996cf6151265 00:11:39.737 
02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:39.737 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:39.737 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:39.737 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_1 10 00:11:39.995 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3c1d1f11-e81e-49c4-b40d-73f6fc87b81d 00:11:39.996 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3c1d1f11-e81e-49c4-b40d-73f6fc87b81d:0 ' 00:11:39.996 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:39.996 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_2 10 00:11:40.254 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=69d0d439-13af-42dd-9baf-9ca47562e781 00:11:40.254 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='69d0d439-13af-42dd-9baf-9ca47562e781:1 ' 00:11:40.254 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.254 02:06:48 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_3 10 00:11:40.514 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0007ea8d-1b53-4fe6-bd1f-df676e671ce7 00:11:40.514 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0007ea8d-1b53-4fe6-bd1f-df676e671ce7:2 ' 00:11:40.514 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.514 02:06:49 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_4 10 00:11:40.772 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=3f74d5e9-9680-4f26-9223-5166c9d8e800 00:11:40.772 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='3f74d5e9-9680-4f26-9223-5166c9d8e800:3 ' 00:11:40.772 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:40.772 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_5 10 00:11:41.031 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=24d86948-32fb-4d04-9bf7-9504864025b1 00:11:41.031 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='24d86948-32fb-4d04-9bf7-9504864025b1:4 ' 00:11:41.031 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.031 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_6 10 00:11:41.289 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8b355d1a-6490-44fa-bb12-62945aa7d109 00:11:41.289 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8b355d1a-6490-44fa-bb12-62945aa7d109:5 ' 00:11:41.289 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.289 02:06:49 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_7 10 00:11:41.289 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bc0bd256-67a7-476c-a70b-bad951e1ff13 
00:11:41.289 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bc0bd256-67a7-476c-a70b-bad951e1ff13:6 ' 00:11:41.289 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.289 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_8 10 00:11:41.547 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=974b1b49-21fe-45ed-a04d-46a8863c9b0e 00:11:41.547 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='974b1b49-21fe-45ed-a04d-46a8863c9b0e:7 ' 00:11:41.547 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.547 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_9 10 00:11:41.805 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c4c9b6c7-3b73-4ebc-aefd-f5f45b0798dc 00:11:41.805 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c4c9b6c7-3b73-4ebc-aefd-f5f45b0798dc:8 ' 00:11:41.805 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:41.805 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 28bbc9db-d028-4a03-8187-996cf6151265 lbd_10 10 00:11:42.064 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=86c3964d-9ce2-4415-a24f-7c47e59cc24c 00:11:42.064 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='86c3964d-9ce2-4415-a24f-7c47e59cc24c:9 ' 00:11:42.064 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias 
'3c1d1f11-e81e-49c4-b40d-73f6fc87b81d:0 69d0d439-13af-42dd-9baf-9ca47562e781:1 0007ea8d-1b53-4fe6-bd1f-df676e671ce7:2 3f74d5e9-9680-4f26-9223-5166c9d8e800:3 24d86948-32fb-4d04-9bf7-9504864025b1:4 8b355d1a-6490-44fa-bb12-62945aa7d109:5 bc0bd256-67a7-476c-a70b-bad951e1ff13:6 974b1b49-21fe-45ed-a04d-46a8863c9b0e:7 c4c9b6c7-3b73-4ebc-aefd-f5f45b0798dc:8 86c3964d-9ce2-4415-a24f-7c47e59cc24c:9 ' 1:4 256 -d 00:11:42.323 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:42.323 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=5 00:11:42.323 02:06:50 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 5 ANY 10.0.0.2/32 00:11:42.582 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 3 -eq 1 ']' 00:11:42.582 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:42.841 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc3 00:11:42.841 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc3 lvs_3 -c 1048576 00:11:43.100 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=b08f54cc-081e-40ad-bb28-6c27eaad67ae 00:11:43.100 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:43.100 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:43.100 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.100 02:06:51 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_1 10 00:11:43.359 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=898f3094-4830-4e4f-840b-23db1b713e6d 00:11:43.359 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='898f3094-4830-4e4f-840b-23db1b713e6d:0 ' 00:11:43.359 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.359 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_2 10 00:11:43.618 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b059f7d4-47d3-423c-92f7-046b3e05d516 00:11:43.619 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b059f7d4-47d3-423c-92f7-046b3e05d516:1 ' 00:11:43.619 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.619 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_3 10 00:11:43.878 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2713b7ca-f580-4249-8316-3b703df011ce 00:11:43.878 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2713b7ca-f580-4249-8316-3b703df011ce:2 ' 00:11:43.878 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:43.878 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_4 10 00:11:44.137 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=341400bb-9dff-46b0-af23-5c7bb415834d 00:11:44.137 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='341400bb-9dff-46b0-af23-5c7bb415834d:3 ' 00:11:44.137 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:11:44.137 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_5 10 00:11:44.396 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2e3751d1-b6e6-4428-b044-2f08488f4859 00:11:44.396 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2e3751d1-b6e6-4428-b044-2f08488f4859:4 ' 00:11:44.396 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.396 02:06:52 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_6 10 00:11:44.655 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=83ff1a1a-c8ad-4f88-8271-fcab05dd399d 00:11:44.655 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='83ff1a1a-c8ad-4f88-8271-fcab05dd399d:5 ' 00:11:44.655 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.655 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_7 10 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=240a2e99-7b17-44ac-87a5-6093eaee0b63 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='240a2e99-7b17-44ac-87a5-6093eaee0b63:6 ' 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_8 10 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=c300fc54-fd69-4771-8d41-432e120e8d61 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c300fc54-fd69-4771-8d41-432e120e8d61:7 ' 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:44.914 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_9 10 00:11:45.172 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=fb7e86c8-be74-4a49-8412-78792555df24 00:11:45.172 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='fb7e86c8-be74-4a49-8412-78792555df24:8 ' 00:11:45.172 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:45.172 02:06:53 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u b08f54cc-081e-40ad-bb28-6c27eaad67ae lbd_10 10 00:11:45.431 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=cc80a66a-1f49-4ee6-a69f-b86a4e6c872b 00:11:45.431 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='cc80a66a-1f49-4ee6-a69f-b86a4e6c872b:9 ' 00:11:45.431 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias '898f3094-4830-4e4f-840b-23db1b713e6d:0 b059f7d4-47d3-423c-92f7-046b3e05d516:1 2713b7ca-f580-4249-8316-3b703df011ce:2 341400bb-9dff-46b0-af23-5c7bb415834d:3 2e3751d1-b6e6-4428-b044-2f08488f4859:4 83ff1a1a-c8ad-4f88-8271-fcab05dd399d:5 240a2e99-7b17-44ac-87a5-6093eaee0b63:6 c300fc54-fd69-4771-8d41-432e120e8d61:7 fb7e86c8-be74-4a49-8412-78792555df24:8 cc80a66a-1f49-4ee6-a69f-b86a4e6c872b:9 ' 1:5 256 -d 00:11:45.690 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 
00:11:45.690 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=6 00:11:45.690 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 6 ANY 10.0.0.2/32 00:11:45.950 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 4 -eq 1 ']' 00:11:45.951 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:46.210 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc4 00:11:46.210 02:06:54 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc4 lvs_4 -c 1048576 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=0b5251df-6f46-419d-8c0a-2c0c7b4a991c 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_1 10 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=aa9cf09a-60d5-4bb1-98a4-74dd4fe6e7f2 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='aa9cf09a-60d5-4bb1-98a4-74dd4fe6e7f2:0 ' 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:46.469 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_2 10 00:11:46.728 02:06:55 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=bab902fd-83d0-4686-8ac3-dc0c1f37efef 00:11:46.728 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='bab902fd-83d0-4686-8ac3-dc0c1f37efef:1 ' 00:11:46.728 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:46.728 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_3 10 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9cbc3110-e245-4987-90ed-dc30cf2005bb 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9cbc3110-e245-4987-90ed-dc30cf2005bb:2 ' 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_4 10 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=76608daa-6843-4363-8397-663446d1eddf 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='76608daa-6843-4363-8397-663446d1eddf:3 ' 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.313 02:06:55 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_5 10 00:11:47.586 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e7e78172-1af4-46f2-bd74-4f52a3707506 00:11:47.586 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e7e78172-1af4-46f2-bd74-4f52a3707506:4 ' 00:11:47.586 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.586 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_6 10 00:11:47.845 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e2ff152b-a2bc-4d99-89e3-5d2a03a1ff07 00:11:47.845 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e2ff152b-a2bc-4d99-89e3-5d2a03a1ff07:5 ' 00:11:47.845 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:47.845 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_7 10 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e6faa045-f917-4779-bc0c-869833b7c844 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e6faa045-f917-4779-bc0c-869833b7c844:6 ' 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_8 10 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=31ad2584-1a19-4eb2-83df-35a1978e3b28 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='31ad2584-1a19-4eb2-83df-35a1978e3b28:7 ' 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.104 02:06:56 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_9 10 00:11:48.362 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=b6a32eff-88e1-4421-8b97-e09f5afea951 00:11:48.362 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b6a32eff-88e1-4421-8b97-e09f5afea951:8 ' 00:11:48.362 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:48.362 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0b5251df-6f46-419d-8c0a-2c0c7b4a991c lbd_10 10 00:11:48.621 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ed10dee5-7284-45be-b1f0-00f31a6eed65 00:11:48.621 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ed10dee5-7284-45be-b1f0-00f31a6eed65:9 ' 00:11:48.621 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias 'aa9cf09a-60d5-4bb1-98a4-74dd4fe6e7f2:0 bab902fd-83d0-4686-8ac3-dc0c1f37efef:1 9cbc3110-e245-4987-90ed-dc30cf2005bb:2 76608daa-6843-4363-8397-663446d1eddf:3 e7e78172-1af4-46f2-bd74-4f52a3707506:4 e2ff152b-a2bc-4d99-89e3-5d2a03a1ff07:5 e6faa045-f917-4779-bc0c-869833b7c844:6 31ad2584-1a19-4eb2-83df-35a1978e3b28:7 b6a32eff-88e1-4421-8b97-e09f5afea951:8 ed10dee5-7284-45be-b1f0-00f31a6eed65:9 ' 1:6 256 -d 00:11:48.880 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:48.880 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=7 00:11:48.880 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 7 ANY 10.0.0.2/32 00:11:49.139 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 5 -eq 1 ']' 00:11:49.139 02:06:57 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:49.398 
02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc5 00:11:49.398 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc5 lvs_5 -c 1048576 00:11:49.657 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=2b94931c-9204-4951-ae7e-2ddec0d56d2b 00:11:49.657 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:49.657 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:49.657 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:49.657 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_1 10 00:11:49.915 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=6fa0c23d-1e0e-429f-bbd4-8330b8f24b22 00:11:49.915 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='6fa0c23d-1e0e-429f-bbd4-8330b8f24b22:0 ' 00:11:49.915 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:49.915 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_2 10 00:11:50.174 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5b0fe30b-fc79-43fe-8483-c183c6d6bdb5 00:11:50.174 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5b0fe30b-fc79-43fe-8483-c183c6d6bdb5:1 ' 00:11:50.174 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.174 02:06:58 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_3 10 
00:11:50.432 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=412c6230-4fad-4877-bd53-5440b705d537 00:11:50.432 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='412c6230-4fad-4877-bd53-5440b705d537:2 ' 00:11:50.432 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.432 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_4 10 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4f5113a8-5988-4f59-8212-efdc65935a82 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4f5113a8-5988-4f59-8212-efdc65935a82:3 ' 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_5 10 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4c627901-e3ad-4ec3-be4b-d2179fe76d52 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4c627901-e3ad-4ec3-be4b-d2179fe76d52:4 ' 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.691 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_6 10 00:11:50.950 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f78fb5e5-317c-48fe-a23c-eb9f9630afed 00:11:50.950 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f78fb5e5-317c-48fe-a23c-eb9f9630afed:5 ' 00:11:50.950 02:06:59 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:50.950 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_7 10 00:11:51.208 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=8a09a218-2541-4d9d-b7bc-e442a9d793d5 00:11:51.208 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='8a09a218-2541-4d9d-b7bc-e442a9d793d5:6 ' 00:11:51.208 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.208 02:06:59 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_8 10 00:11:51.467 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=164b6c93-fe9b-4b03-bca3-062b86632fe5 00:11:51.467 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='164b6c93-fe9b-4b03-bca3-062b86632fe5:7 ' 00:11:51.467 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.467 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_9 10 00:11:51.726 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=291ec6d6-ccaa-40b7-87f4-3e80e747bf7a 00:11:51.726 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='291ec6d6-ccaa-40b7-87f4-3e80e747bf7a:8 ' 00:11:51.726 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:51.726 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2b94931c-9204-4951-ae7e-2ddec0d56d2b lbd_10 10 00:11:51.726 02:07:00 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=60a3b5b1-9b68-4d4d-87c4-cdc084380a97 00:11:51.726 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='60a3b5b1-9b68-4d4d-87c4-cdc084380a97:9 ' 00:11:51.726 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias '6fa0c23d-1e0e-429f-bbd4-8330b8f24b22:0 5b0fe30b-fc79-43fe-8483-c183c6d6bdb5:1 412c6230-4fad-4877-bd53-5440b705d537:2 4f5113a8-5988-4f59-8212-efdc65935a82:3 4c627901-e3ad-4ec3-be4b-d2179fe76d52:4 f78fb5e5-317c-48fe-a23c-eb9f9630afed:5 8a09a218-2541-4d9d-b7bc-e442a9d793d5:6 164b6c93-fe9b-4b03-bca3-062b86632fe5:7 291ec6d6-ccaa-40b7-87f4-3e80e747bf7a:8 60a3b5b1-9b68-4d4d-87c4-cdc084380a97:9 ' 1:7 256 -d 00:11:51.984 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:51.984 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=8 00:11:51.984 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 8 ANY 10.0.0.2/32 00:11:52.243 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 6 -eq 1 ']' 00:11:52.243 02:07:00 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:52.501 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc6 00:11:52.501 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc6 lvs_6 -c 1048576 00:11:52.760 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=d6287d43-568c-4772-befe-4ca874fec7de 00:11:52.760 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:52.760 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:52.760 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:52.760 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_1 10 00:11:53.019 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=380c1ee5-4f4a-452e-b564-065603c60dd9 00:11:53.019 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='380c1ee5-4f4a-452e-b564-065603c60dd9:0 ' 00:11:53.019 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:53.019 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_2 10 00:11:53.277 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=292a6552-9de5-47dc-ae5d-5b11b60f82d1 00:11:53.277 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='292a6552-9de5-47dc-ae5d-5b11b60f82d1:1 ' 00:11:53.277 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:53.277 02:07:01 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_3 10 00:11:53.536 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ec50153c-39c1-4d9e-8891-1fc6866c14d6 00:11:53.536 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ec50153c-39c1-4d9e-8891-1fc6866c14d6:2 ' 00:11:53.536 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:53.536 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
d6287d43-568c-4772-befe-4ca874fec7de lbd_4 10 00:11:53.794 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9f3e160c-d268-4771-97f4-0993edd42052 00:11:53.794 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9f3e160c-d268-4771-97f4-0993edd42052:3 ' 00:11:53.794 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:53.794 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_5 10 00:11:54.052 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=97b224e5-f10e-41d8-8412-aaa44f8014aa 00:11:54.052 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='97b224e5-f10e-41d8-8412-aaa44f8014aa:4 ' 00:11:54.052 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.052 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_6 10 00:11:54.311 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=59289cb9-2d76-47ef-94ed-db598dfdf112 00:11:54.311 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='59289cb9-2d76-47ef-94ed-db598dfdf112:5 ' 00:11:54.311 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.311 02:07:02 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_7 10 00:11:54.569 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=55d0ebec-5c8f-4554-95be-708520fae0e4 00:11:54.569 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='55d0ebec-5c8f-4554-95be-708520fae0e4:6 ' 
00:11:54.569 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.569 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_8 10 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c6442484-8433-45d3-947d-86cd4775983a 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c6442484-8433-45d3-947d-86cd4775983a:7 ' 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_9 10 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=58e95d86-6bca-47ef-b287-af087c5aba87 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='58e95d86-6bca-47ef-b287-af087c5aba87:8 ' 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:54.827 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d6287d43-568c-4772-befe-4ca874fec7de lbd_10 10 00:11:55.086 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7249b5aa-e34d-48e0-ba1c-bab7fedcc8ca 00:11:55.086 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7249b5aa-e34d-48e0-ba1c-bab7fedcc8ca:9 ' 00:11:55.086 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias '380c1ee5-4f4a-452e-b564-065603c60dd9:0 292a6552-9de5-47dc-ae5d-5b11b60f82d1:1 ec50153c-39c1-4d9e-8891-1fc6866c14d6:2 
9f3e160c-d268-4771-97f4-0993edd42052:3 97b224e5-f10e-41d8-8412-aaa44f8014aa:4 59289cb9-2d76-47ef-94ed-db598dfdf112:5 55d0ebec-5c8f-4554-95be-708520fae0e4:6 c6442484-8433-45d3-947d-86cd4775983a:7 58e95d86-6bca-47ef-b287-af087c5aba87:8 7249b5aa-e34d-48e0-ba1c-bab7fedcc8ca:9 ' 1:8 256 -d 00:11:55.344 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:55.344 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=9 00:11:55.344 02:07:03 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 9 ANY 10.0.0.2/32 00:11:55.602 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 7 -eq 1 ']' 00:11:55.602 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:55.860 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc7 00:11:55.860 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc7 lvs_7 -c 1048576 00:11:56.119 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=2a978a18-fbfe-4eee-85d5-40bcc8eec653 00:11:56.119 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:56.119 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:56.119 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:56.119 02:07:04 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_1 10 00:11:56.377 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=65c51632-1007-44d9-a435-0a9ef7526e91 00:11:56.377 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='65c51632-1007-44d9-a435-0a9ef7526e91:0 ' 00:11:56.377 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:56.377 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_2 10 00:11:56.636 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0b39bad4-1aad-4cb8-a44d-34086dc40853 00:11:56.636 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0b39bad4-1aad-4cb8-a44d-34086dc40853:1 ' 00:11:56.636 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:56.636 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_3 10 00:11:56.894 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=2b451f09-0189-404f-9d6b-35f50321bff0 00:11:56.894 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='2b451f09-0189-404f-9d6b-35f50321bff0:2 ' 00:11:56.894 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:56.894 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_4 10 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=821f28e2-2cf1-47e8-be8a-8cf661518147 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='821f28e2-2cf1-47e8-be8a-8cf661518147:3 ' 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_5 10 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5f6a15ee-1043-4fc0-816b-9fe2ed661488 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5f6a15ee-1043-4fc0-816b-9fe2ed661488:4 ' 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.152 02:07:05 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_6 10 00:11:57.410 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=08c596f3-34b4-4a4f-80ce-001b48e086af 00:11:57.410 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='08c596f3-34b4-4a4f-80ce-001b48e086af:5 ' 00:11:57.410 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.410 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_7 10 00:11:57.668 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4ad8bff2-e4b3-42b7-8cfd-870c13841341 00:11:57.668 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4ad8bff2-e4b3-42b7-8cfd-870c13841341:6 ' 00:11:57.668 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.668 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_8 10 00:11:57.926 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=4b38ad3a-bb95-4a1b-95ef-8960f191f7ad 00:11:57.926 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@62 -- # LUNs+='4b38ad3a-bb95-4a1b-95ef-8960f191f7ad:7 ' 00:11:57.926 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:57.926 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_9 10 00:11:58.184 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=08590e88-4a40-48ce-a149-8304d3948cc9 00:11:58.184 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='08590e88-4a40-48ce-a149-8304d3948cc9:8 ' 00:11:58.184 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:58.184 02:07:06 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2a978a18-fbfe-4eee-85d5-40bcc8eec653 lbd_10 10 00:11:58.443 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=7a8ad085-e5ac-416a-b276-addffed28afe 00:11:58.443 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='7a8ad085-e5ac-416a-b276-addffed28afe:9 ' 00:11:58.443 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias '65c51632-1007-44d9-a435-0a9ef7526e91:0 0b39bad4-1aad-4cb8-a44d-34086dc40853:1 2b451f09-0189-404f-9d6b-35f50321bff0:2 821f28e2-2cf1-47e8-be8a-8cf661518147:3 5f6a15ee-1043-4fc0-816b-9fe2ed661488:4 08c596f3-34b4-4a4f-80ce-001b48e086af:5 4ad8bff2-e4b3-42b7-8cfd-870c13841341:6 4b38ad3a-bb95-4a1b-95ef-8960f191f7ad:7 08590e88-4a40-48ce-a149-8304d3948cc9:8 7a8ad085-e5ac-416a-b276-addffed28afe:9 ' 1:9 256 -d 00:11:58.701 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:11:58.701 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=10 
00:11:58.701 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 10 ANY 10.0.0.2/32 00:11:58.701 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 8 -eq 1 ']' 00:11:58.701 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:11:59.268 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc8 00:11:59.268 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc8 lvs_8 -c 1048576 00:11:59.268 02:07:07 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=58c7146b-1028-4652-9e4b-1f6d2653606d 00:11:59.268 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:11:59.268 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:11:59.268 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.268 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_1 10 00:11:59.526 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dd41541a-3cf8-4a29-bf7d-bc26d52fe609 00:11:59.526 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dd41541a-3cf8-4a29-bf7d-bc26d52fe609:0 ' 00:11:59.526 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.526 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_2 10 00:11:59.784 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=f26ad8bc-cfde-46b8-985e-2d343f172b39 00:11:59.784 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f26ad8bc-cfde-46b8-985e-2d343f172b39:1 ' 00:11:59.784 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:11:59.784 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_3 10 00:12:00.042 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=dbb840cf-5dd6-4936-b9d3-8bfe0fb5e133 00:12:00.042 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='dbb840cf-5dd6-4936-b9d3-8bfe0fb5e133:2 ' 00:12:00.042 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:00.042 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_4 10 00:12:00.300 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=64e0d61f-21b5-4b91-89eb-1a945019574f 00:12:00.300 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='64e0d61f-21b5-4b91-89eb-1a945019574f:3 ' 00:12:00.300 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:00.300 02:07:08 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_5 10 00:12:00.558 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=9d1e80e0-69a1-4fb6-a4b1-a3e8b400b3bc 00:12:00.558 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='9d1e80e0-69a1-4fb6-a4b1-a3e8b400b3bc:4 ' 00:12:00.558 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:00.558 02:07:09 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_6 10 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=06a8eff8-b245-4a9b-b429-223fda7bdc90 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='06a8eff8-b245-4a9b-b429-223fda7bdc90:5 ' 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_7 10 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=e7836808-4273-4666-a8e8-c35f91b154ab 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='e7836808-4273-4666-a8e8-c35f91b154ab:6 ' 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:00.816 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_8 10 00:12:01.074 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a6617b94-609e-45f2-9c4b-404dc20a3ace 00:12:01.074 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a6617b94-609e-45f2-9c4b-404dc20a3ace:7 ' 00:12:01.074 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:01.074 02:07:09 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_9 10 00:12:01.332 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=74c4ff92-16cf-4605-a3ae-807e97c0c08d 
00:12:01.332 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='74c4ff92-16cf-4605-a3ae-807e97c0c08d:8 ' 00:12:01.332 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:01.332 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 58c7146b-1028-4652-9e4b-1f6d2653606d lbd_10 10 00:12:01.590 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=30c40915-34a9-4011-9bb7-78621cb4dde8 00:12:01.590 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='30c40915-34a9-4011-9bb7-78621cb4dde8:9 ' 00:12:01.591 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias 'dd41541a-3cf8-4a29-bf7d-bc26d52fe609:0 f26ad8bc-cfde-46b8-985e-2d343f172b39:1 dbb840cf-5dd6-4936-b9d3-8bfe0fb5e133:2 64e0d61f-21b5-4b91-89eb-1a945019574f:3 9d1e80e0-69a1-4fb6-a4b1-a3e8b400b3bc:4 06a8eff8-b245-4a9b-b429-223fda7bdc90:5 e7836808-4273-4666-a8e8-c35f91b154ab:6 a6617b94-609e-45f2-9c4b-404dc20a3ace:7 74c4ff92-16cf-4605-a3ae-807e97c0c08d:8 30c40915-34a9-4011-9bb7-78621cb4dde8:9 ' 1:10 256 -d 00:12:01.849 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:01.849 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=11 00:12:01.849 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 11 ANY 10.0.0.2/32 00:12:02.108 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 9 -eq 1 ']' 00:12:02.108 02:07:10 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:02.369 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # 
bdev=Malloc9 00:12:02.369 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc9 lvs_9 -c 1048576 00:12:02.628 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 00:12:02.628 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:02.628 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:02.628 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:02.628 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_1 10 00:12:02.886 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=b45f4751-28df-473f-a84c-df58c2fd633b 00:12:02.886 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='b45f4751-28df-473f-a84c-df58c2fd633b:0 ' 00:12:02.886 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:02.886 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_2 10 00:12:03.144 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=87531184-6cc9-45a8-b1a1-b60fdf1b6ac7 00:12:03.144 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='87531184-6cc9-45a8-b1a1-b60fdf1b6ac7:1 ' 00:12:03.144 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.144 02:07:11 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_3 10 00:12:03.402 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@61 -- # lb_name=4f6d398b-84c8-4f1e-8a7f-bc8dd2c0a0ac 00:12:03.402 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='4f6d398b-84c8-4f1e-8a7f-bc8dd2c0a0ac:2 ' 00:12:03.402 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.402 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_4 10 00:12:03.660 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=cd0c9bd8-b19c-402a-b7cf-b55a59b9233f 00:12:03.660 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='cd0c9bd8-b19c-402a-b7cf-b55a59b9233f:3 ' 00:12:03.660 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.660 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_5 10 00:12:03.918 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=deb481f8-a38e-48be-a3ff-fa00a2e430c6 00:12:03.918 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='deb481f8-a38e-48be-a3ff-fa00a2e430c6:4 ' 00:12:03.918 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:03.918 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_6 10 00:12:03.918 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=aab7e9fc-e213-47fc-9c4b-b17ee967929e 00:12:03.919 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='aab7e9fc-e213-47fc-9c4b-b17ee967929e:5 ' 00:12:03.919 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 
$NUM_LVOL) 00:12:03.919 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_7 10 00:12:04.177 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=667a3da2-008d-4a90-aaf4-35ee4da0207a 00:12:04.177 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='667a3da2-008d-4a90-aaf4-35ee4da0207a:6 ' 00:12:04.177 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:04.177 02:07:12 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_8 10 00:12:04.435 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=36b0eeaa-4483-4453-adc7-26c6e7a54666 00:12:04.436 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='36b0eeaa-4483-4453-adc7-26c6e7a54666:7 ' 00:12:04.436 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:04.436 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_9 10 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=a79abdad-b5ca-4b2d-8f00-3dfaa9e48e86 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='a79abdad-b5ca-4b2d-8f00-3dfaa9e48e86:8 ' 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 96fc7cb2-e5d0-40d7-a012-8ddafb380ca3 lbd_10 10 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # 
lb_name=c0f3a579-7caf-47bc-b041-d698ca5f2e8a 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c0f3a579-7caf-47bc-b041-d698ca5f2e8a:9 ' 00:12:04.694 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias 'b45f4751-28df-473f-a84c-df58c2fd633b:0 87531184-6cc9-45a8-b1a1-b60fdf1b6ac7:1 4f6d398b-84c8-4f1e-8a7f-bc8dd2c0a0ac:2 cd0c9bd8-b19c-402a-b7cf-b55a59b9233f:3 deb481f8-a38e-48be-a3ff-fa00a2e430c6:4 aab7e9fc-e213-47fc-9c4b-b17ee967929e:5 667a3da2-008d-4a90-aaf4-35ee4da0207a:6 36b0eeaa-4483-4453-adc7-26c6e7a54666:7 a79abdad-b5ca-4b2d-8f00-3dfaa9e48e86:8 c0f3a579-7caf-47bc-b041-d698ca5f2e8a:9 ' 1:11 256 -d 00:12:04.964 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@45 -- # for i in $(seq 1 $NUM_LVS) 00:12:04.964 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@46 -- # INITIATOR_TAG=12 00:12:04.964 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 12 ANY 10.0.0.2/32 00:12:05.237 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@48 -- # '[' 10 -eq 1 ']' 00:12:05.237 02:07:13 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 128 512 00:12:05.495 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@56 -- # bdev=Malloc10 00:12:05.495 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Malloc10 lvs_10 -c 1048576 00:12:05.753 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@58 -- # ls_guid=3cd8ee45-115d-49de-a167-8a3975fcc491 00:12:05.753 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@59 -- # LUNs= 00:12:05.753 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # seq 1 10 00:12:05.753 
02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:05.753 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_1 10 00:12:06.012 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0d88c5fa-efdd-4582-b41f-6bea874dced2 00:12:06.012 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0d88c5fa-efdd-4582-b41f-6bea874dced2:0 ' 00:12:06.012 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.012 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_2 10 00:12:06.271 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=c23ab7d2-6905-43ee-92ac-c4e4deac4728 00:12:06.271 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='c23ab7d2-6905-43ee-92ac-c4e4deac4728:1 ' 00:12:06.271 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.271 02:07:14 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_3 10 00:12:06.529 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=ed050e4b-462f-481a-a999-a239ee7fec48 00:12:06.529 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='ed050e4b-462f-481a-a999-a239ee7fec48:2 ' 00:12:06.530 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.530 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_4 10 00:12:06.788 
02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=20b57537-48b0-4a7b-9a4d-fbcae70e6095 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='20b57537-48b0-4a7b-9a4d-fbcae70e6095:3 ' 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_5 10 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=d18bebb5-05d5-43c2-9629-29dac62110f7 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='d18bebb5-05d5-43c2-9629-29dac62110f7:4 ' 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:06.788 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_6 10 00:12:07.047 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=f4957fda-7a94-4316-b21a-1d48cfefb70b 00:12:07.047 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='f4957fda-7a94-4316-b21a-1d48cfefb70b:5 ' 00:12:07.047 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:07.047 02:07:15 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_7 10 00:12:07.305 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=593d5779-8390-4b02-b00d-37c02de3e0c1 00:12:07.305 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='593d5779-8390-4b02-b00d-37c02de3e0c1:6 ' 00:12:07.305 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- 
lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:07.305 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_8 10 00:12:07.564 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=626ef1e6-af83-4bae-842e-2ceaa43e146a 00:12:07.564 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='626ef1e6-af83-4bae-842e-2ceaa43e146a:7 ' 00:12:07.564 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:07.564 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_9 10 00:12:07.822 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=0f68124a-b7de-4e8f-ba67-3efb8c304067 00:12:07.822 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='0f68124a-b7de-4e8f-ba67-3efb8c304067:8 ' 00:12:07.822 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@60 -- # for j in $(seq 1 $NUM_LVOL) 00:12:07.822 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3cd8ee45-115d-49de-a167-8a3975fcc491 lbd_10 10 00:12:08.081 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@61 -- # lb_name=5ba52cef-ed1e-4416-81c3-0fb583b4f498 00:12:08.081 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@62 -- # LUNs+='5ba52cef-ed1e-4416-81c3-0fb583b4f498:9 ' 00:12:08.081 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias '0d88c5fa-efdd-4582-b41f-6bea874dced2:0 c23ab7d2-6905-43ee-92ac-c4e4deac4728:1 ed050e4b-462f-481a-a999-a239ee7fec48:2 20b57537-48b0-4a7b-9a4d-fbcae70e6095:3 
d18bebb5-05d5-43c2-9629-29dac62110f7:4 f4957fda-7a94-4316-b21a-1d48cfefb70b:5 593d5779-8390-4b02-b00d-37c02de3e0c1:6 626ef1e6-af83-4bae-842e-2ceaa43e146a:7 0f68124a-b7de-4e8f-ba67-3efb8c304067:8 5ba52cef-ed1e-4416-81c3-0fb583b4f498:9 ' 1:12 256 -d 00:12:08.339 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@66 -- # timing_exit setup 00:12:08.339 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.339 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:08.339 02:07:16 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@68 -- # sleep 1 00:12:09.274 02:07:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@70 -- # timing_enter discovery 00:12:09.274 02:07:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:09.274 02:07:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:09.275 02:07:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@71 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:12:09.275 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:12:09.275 02:07:17 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@72 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:12:09.533 [2024-07-23 02:07:18.060764] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.533 [2024-07-23 02:07:18.062542] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: 
unsupported INQUIRY VPD page 0xb9 00:12:09.533 [2024-07-23 02:07:18.096037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.105877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.128975] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.162392] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.162393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.167637] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.184876] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.188917] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.200968] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.214309] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.224873] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.239789] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.252053] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.294141] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.534 [2024-07-23 02:07:18.307575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.320063] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 
[2024-07-23 02:07:18.327515] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.348809] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.365542] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.369879] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.373237] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.381633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.426001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.430960] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.447858] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.512070] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:09.792 [2024-07-23 02:07:18.546486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.634687] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.635278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.705603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.724554] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.747645] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.757993] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.767329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.794906] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.806423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.051 [2024-07-23 02:07:18.816172] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.854039] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.862140] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.868804] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.912053] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.914291] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.923892] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.947112] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.947196] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.970307] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.981565] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:18.997748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:19.003913] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:19.008738] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:19.032429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:19.046449] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.312 [2024-07-23 02:07:19.067348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.137001] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.193520] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.202759] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.224705] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.248952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.249426] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.261667] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.262865] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.301727] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.571 [2024-07-23 02:07:19.335537] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.356733] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.376151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 
02:07:19.405054] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.408350] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.436741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.450322] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.474238] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.490814] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.511622] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.514736] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.551118] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.567019] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.576231] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.576921] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:10.830 [2024-07-23 02:07:19.583037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.629095] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.629136] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.645906] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.660148] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.669888] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.682197] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.705702] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.713575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.718321] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.731939] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.739119] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.754322] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.766389] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.785970] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.794575] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.810292] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.824384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.089 [2024-07-23 02:07:19.862074] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.348 [2024-07-23 02:07:19.938439] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 
10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260]
00:12:11.348 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260]
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful.
00:12:11.348 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful.
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@73 -- # waitforiscsidevices 100
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@116 -- # local num=100
00:12:11.348 [2024-07-23 02:07:19.958512] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@119 -- # n=100
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@120 -- # '[' 100 -ne 100 ']'
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@123 -- # return 0
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@74 -- # timing_exit discovery
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:11.348 02:07:19 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x
00:12:11.348 02:07:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@76 -- # timing_enter fio
00:12:11.348 02:07:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:11.348 02:07:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x
00:12:11.348 02:07:20 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 8 -t randwrite -r 10 -v
00:12:11.348 [global]
00:12:11.348 thread=1 
00:12:11.348 invalidate=1 00:12:11.348 rw=randwrite 00:12:11.348 time_based=1 00:12:11.348 runtime=10 00:12:11.348 ioengine=libaio 00:12:11.348 direct=1 00:12:11.348 bs=131072 00:12:11.348 iodepth=8 00:12:11.348 norandommap=0 00:12:11.348 numjobs=1 00:12:11.348 00:12:11.348 verify_dump=1 00:12:11.348 verify_backlog=512 00:12:11.348 verify_state_save=0 00:12:11.348 do_verify=1 00:12:11.348 verify=crc32c-intel 00:12:11.348 [job0] 00:12:11.348 filename=/dev/sdb 00:12:11.348 [job1] 00:12:11.348 filename=/dev/sdd 00:12:11.348 [job2] 00:12:11.348 filename=/dev/sdf 00:12:11.348 [job3] 00:12:11.348 filename=/dev/sdh 00:12:11.348 [job4] 00:12:11.348 filename=/dev/sdi 00:12:11.348 [job5] 00:12:11.348 filename=/dev/sdm 00:12:11.348 [job6] 00:12:11.348 filename=/dev/sdq 00:12:11.348 [job7] 00:12:11.348 filename=/dev/sdt 00:12:11.607 [job8] 00:12:11.607 filename=/dev/sdu 00:12:11.607 [job9] 00:12:11.607 filename=/dev/sdy 00:12:11.607 [job10] 00:12:11.607 filename=/dev/sdj 00:12:11.607 [job11] 00:12:11.607 filename=/dev/sdl 00:12:11.607 [job12] 00:12:11.607 filename=/dev/sdo 00:12:11.607 [job13] 00:12:11.607 filename=/dev/sdr 00:12:11.607 [job14] 00:12:11.607 filename=/dev/sdv 00:12:11.607 [job15] 00:12:11.607 filename=/dev/sdx 00:12:11.607 [job16] 00:12:11.607 filename=/dev/sdz 00:12:11.607 [job17] 00:12:11.607 filename=/dev/sdab 00:12:11.607 [job18] 00:12:11.607 filename=/dev/sdac 00:12:11.607 [job19] 00:12:11.607 filename=/dev/sdad 00:12:11.607 [job20] 00:12:11.607 filename=/dev/sdae 00:12:11.607 [job21] 00:12:11.607 filename=/dev/sdaf 00:12:11.607 [job22] 00:12:11.607 filename=/dev/sdah 00:12:11.607 [job23] 00:12:11.607 filename=/dev/sdaj 00:12:11.607 [job24] 00:12:11.607 filename=/dev/sdal 00:12:11.607 [job25] 00:12:11.607 filename=/dev/sdao 00:12:11.607 [job26] 00:12:11.607 filename=/dev/sdaq 00:12:11.607 [job27] 00:12:11.607 filename=/dev/sdat 00:12:11.607 [job28] 00:12:11.607 filename=/dev/sday 00:12:11.607 [job29] 00:12:11.607 filename=/dev/sdba 00:12:11.607 [job30] 
00:12:11.607 filename=/dev/sdag 00:12:11.607 [job31] 00:12:11.607 filename=/dev/sdai 00:12:11.607 [job32] 00:12:11.607 filename=/dev/sdak 00:12:11.607 [job33] 00:12:11.607 filename=/dev/sdam 00:12:11.607 [job34] 00:12:11.607 filename=/dev/sdan 00:12:11.607 [job35] 00:12:11.607 filename=/dev/sdap 00:12:11.607 [job36] 00:12:11.607 filename=/dev/sdar 00:12:11.607 [job37] 00:12:11.608 filename=/dev/sdau 00:12:11.608 [job38] 00:12:11.608 filename=/dev/sdaw 00:12:11.608 [job39] 00:12:11.608 filename=/dev/sdaz 00:12:11.608 [job40] 00:12:11.608 filename=/dev/sdas 00:12:11.608 [job41] 00:12:11.608 filename=/dev/sdav 00:12:11.608 [job42] 00:12:11.608 filename=/dev/sdax 00:12:11.608 [job43] 00:12:11.608 filename=/dev/sdbb 00:12:11.608 [job44] 00:12:11.608 filename=/dev/sdbc 00:12:11.608 [job45] 00:12:11.608 filename=/dev/sdbd 00:12:11.608 [job46] 00:12:11.608 filename=/dev/sdbe 00:12:11.608 [job47] 00:12:11.608 filename=/dev/sdbg 00:12:11.608 [job48] 00:12:11.608 filename=/dev/sdbi 00:12:11.608 [job49] 00:12:11.608 filename=/dev/sdbk 00:12:11.608 [job50] 00:12:11.608 filename=/dev/sdbf 00:12:11.608 [job51] 00:12:11.608 filename=/dev/sdbh 00:12:11.608 [job52] 00:12:11.608 filename=/dev/sdbj 00:12:11.608 [job53] 00:12:11.608 filename=/dev/sdbl 00:12:11.608 [job54] 00:12:11.608 filename=/dev/sdbm 00:12:11.608 [job55] 00:12:11.608 filename=/dev/sdbn 00:12:11.608 [job56] 00:12:11.608 filename=/dev/sdbo 00:12:11.608 [job57] 00:12:11.608 filename=/dev/sdbp 00:12:11.608 [job58] 00:12:11.608 filename=/dev/sdbq 00:12:11.608 [job59] 00:12:11.608 filename=/dev/sdbs 00:12:11.608 [job60] 00:12:11.608 filename=/dev/sdbr 00:12:11.608 [job61] 00:12:11.608 filename=/dev/sdbt 00:12:11.608 [job62] 00:12:11.608 filename=/dev/sdbv 00:12:11.608 [job63] 00:12:11.608 filename=/dev/sdby 00:12:11.608 [job64] 00:12:11.608 filename=/dev/sdca 00:12:11.608 [job65] 00:12:11.608 filename=/dev/sdcc 00:12:11.608 [job66] 00:12:11.608 filename=/dev/sdcg 00:12:11.608 [job67] 00:12:11.608 filename=/dev/sdcj 
00:12:11.608 [job68] 00:12:11.608 filename=/dev/sdcm 00:12:11.608 [job69] 00:12:11.608 filename=/dev/sdcr 00:12:11.608 [job70] 00:12:11.608 filename=/dev/sdbu 00:12:11.608 [job71] 00:12:11.608 filename=/dev/sdbw 00:12:11.608 [job72] 00:12:11.608 filename=/dev/sdbz 00:12:11.608 [job73] 00:12:11.608 filename=/dev/sdce 00:12:11.608 [job74] 00:12:11.608 filename=/dev/sdcf 00:12:11.608 [job75] 00:12:11.608 filename=/dev/sdch 00:12:11.608 [job76] 00:12:11.608 filename=/dev/sdck 00:12:11.608 [job77] 00:12:11.608 filename=/dev/sdcn 00:12:11.608 [job78] 00:12:11.608 filename=/dev/sdcp 00:12:11.608 [job79] 00:12:11.608 filename=/dev/sdcs 00:12:11.608 [job80] 00:12:11.608 filename=/dev/sdbx 00:12:11.608 [job81] 00:12:11.608 filename=/dev/sdcb 00:12:11.608 [job82] 00:12:11.608 filename=/dev/sdcd 00:12:11.608 [job83] 00:12:11.608 filename=/dev/sdci 00:12:11.608 [job84] 00:12:11.608 filename=/dev/sdcl 00:12:11.608 [job85] 00:12:11.608 filename=/dev/sdco 00:12:11.608 [job86] 00:12:11.608 filename=/dev/sdcq 00:12:11.608 [job87] 00:12:11.608 filename=/dev/sdct 00:12:11.608 [job88] 00:12:11.608 filename=/dev/sdcu 00:12:11.608 [job89] 00:12:11.608 filename=/dev/sdcv 00:12:11.608 [job90] 00:12:11.608 filename=/dev/sda 00:12:11.608 [job91] 00:12:11.608 filename=/dev/sdc 00:12:11.608 [job92] 00:12:11.608 filename=/dev/sde 00:12:11.608 [job93] 00:12:11.608 filename=/dev/sdg 00:12:11.608 [job94] 00:12:11.608 filename=/dev/sdk 00:12:11.608 [job95] 00:12:11.608 filename=/dev/sdn 00:12:11.608 [job96] 00:12:11.608 filename=/dev/sdp 00:12:11.608 [job97] 00:12:11.608 filename=/dev/sds 00:12:11.608 [job98] 00:12:11.608 filename=/dev/sdw 00:12:11.608 [job99] 00:12:11.608 filename=/dev/sdaa 00:12:12.986 queue_depth set to 113 (sdb) 00:12:12.986 queue_depth set to 113 (sdd) 00:12:12.986 queue_depth set to 113 (sdf) 00:12:12.986 queue_depth set to 113 (sdh) 00:12:12.986 queue_depth set to 113 (sdi) 00:12:12.986 queue_depth set to 113 (sdm) 00:12:12.986 queue_depth set to 113 (sdq) 00:12:12.986 
queue_depth set to 113 (sdt) 00:12:12.986 queue_depth set to 113 (sdu) 00:12:12.986 queue_depth set to 113 (sdy) 00:12:12.986 queue_depth set to 113 (sdj) 00:12:13.245 queue_depth set to 113 (sdl) 00:12:13.245 queue_depth set to 113 (sdo) 00:12:13.245 queue_depth set to 113 (sdr) 00:12:13.245 queue_depth set to 113 (sdv) 00:12:13.245 queue_depth set to 113 (sdx) 00:12:13.245 queue_depth set to 113 (sdz) 00:12:13.245 queue_depth set to 113 (sdab) 00:12:13.245 queue_depth set to 113 (sdac) 00:12:13.245 queue_depth set to 113 (sdad) 00:12:13.245 queue_depth set to 113 (sdae) 00:12:13.245 queue_depth set to 113 (sdaf) 00:12:13.504 queue_depth set to 113 (sdah) 00:12:13.504 queue_depth set to 113 (sdaj) 00:12:13.504 queue_depth set to 113 (sdal) 00:12:13.504 queue_depth set to 113 (sdao) 00:12:13.504 queue_depth set to 113 (sdaq) 00:12:13.504 queue_depth set to 113 (sdat) 00:12:13.504 queue_depth set to 113 (sday) 00:12:13.504 queue_depth set to 113 (sdba) 00:12:13.504 queue_depth set to 113 (sdag) 00:12:13.504 queue_depth set to 113 (sdai) 00:12:13.504 queue_depth set to 113 (sdak) 00:12:13.504 queue_depth set to 113 (sdam) 00:12:13.763 queue_depth set to 113 (sdan) 00:12:13.763 queue_depth set to 113 (sdap) 00:12:13.763 queue_depth set to 113 (sdar) 00:12:13.763 queue_depth set to 113 (sdau) 00:12:13.763 queue_depth set to 113 (sdaw) 00:12:13.763 queue_depth set to 113 (sdaz) 00:12:13.763 queue_depth set to 113 (sdas) 00:12:13.763 queue_depth set to 113 (sdav) 00:12:13.763 queue_depth set to 113 (sdax) 00:12:13.763 queue_depth set to 113 (sdbb) 00:12:13.763 queue_depth set to 113 (sdbc) 00:12:13.763 queue_depth set to 113 (sdbd) 00:12:14.022 queue_depth set to 113 (sdbe) 00:12:14.022 queue_depth set to 113 (sdbg) 00:12:14.022 queue_depth set to 113 (sdbi) 00:12:14.022 queue_depth set to 113 (sdbk) 00:12:14.022 queue_depth set to 113 (sdbf) 00:12:14.022 queue_depth set to 113 (sdbh) 00:12:14.022 queue_depth set to 113 (sdbj) 00:12:14.022 queue_depth set to 113 (sdbl) 
00:12:14.022 queue_depth set to 113 (sdbm) 00:12:14.022 queue_depth set to 113 (sdbn) 00:12:14.022 queue_depth set to 113 (sdbo) 00:12:14.022 queue_depth set to 113 (sdbp) 00:12:14.281 queue_depth set to 113 (sdbq) 00:12:14.281 queue_depth set to 113 (sdbs) 00:12:14.281 queue_depth set to 113 (sdbr) 00:12:14.281 queue_depth set to 113 (sdbt) 00:12:14.281 queue_depth set to 113 (sdbv) 00:12:14.281 queue_depth set to 113 (sdby) 00:12:14.281 queue_depth set to 113 (sdca) 00:12:14.281 queue_depth set to 113 (sdcc) 00:12:14.281 queue_depth set to 113 (sdcg) 00:12:14.281 queue_depth set to 113 (sdcj) 00:12:14.281 queue_depth set to 113 (sdcm) 00:12:14.541 queue_depth set to 113 (sdcr) 00:12:14.541 queue_depth set to 113 (sdbu) 00:12:14.541 queue_depth set to 113 (sdbw) 00:12:14.541 queue_depth set to 113 (sdbz) 00:12:14.541 queue_depth set to 113 (sdce) 00:12:14.541 queue_depth set to 113 (sdcf) 00:12:14.541 queue_depth set to 113 (sdch) 00:12:14.541 queue_depth set to 113 (sdck) 00:12:14.541 queue_depth set to 113 (sdcn) 00:12:14.541 queue_depth set to 113 (sdcp) 00:12:14.541 queue_depth set to 113 (sdcs) 00:12:14.541 queue_depth set to 113 (sdbx) 00:12:14.799 queue_depth set to 113 (sdcb) 00:12:14.799 queue_depth set to 113 (sdcd) 00:12:14.799 queue_depth set to 113 (sdci) 00:12:14.799 queue_depth set to 113 (sdcl) 00:12:14.799 queue_depth set to 113 (sdco) 00:12:14.799 queue_depth set to 113 (sdcq) 00:12:14.799 queue_depth set to 113 (sdct) 00:12:14.799 queue_depth set to 113 (sdcu) 00:12:14.799 queue_depth set to 113 (sdcv) 00:12:14.799 queue_depth set to 113 (sda) 00:12:14.799 queue_depth set to 113 (sdc) 00:12:14.799 queue_depth set to 113 (sde) 00:12:15.058 queue_depth set to 113 (sdg) 00:12:15.058 queue_depth set to 113 (sdk) 00:12:15.058 queue_depth set to 113 (sdn) 00:12:15.058 queue_depth set to 113 (sdp) 00:12:15.058 queue_depth set to 113 (sds) 00:12:15.058 queue_depth set to 113 (sdw) 00:12:15.058 queue_depth set to 113 (sdaa) 00:12:15.317 job0: (g=0): 
rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job1: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job2: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job3: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job4: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job5: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job6: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job7: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job8: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job9: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job10: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job11: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job12: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job13: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job14: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job15: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, 
ioengine=libaio, iodepth=8 00:12:15.317 job16: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job17: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job18: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job19: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job20: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job21: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.317 job22: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job23: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job24: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job25: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job26: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job27: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job28: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job29: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job30: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job31: (g=0): rw=randwrite, bs=(R) 
128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job32: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job33: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job34: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job35: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job36: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job37: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job38: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job39: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job40: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job41: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job42: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job43: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job44: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job45: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job46: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 
00:12:15.318 job47: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job48: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job49: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job50: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job51: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job52: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job53: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job54: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job55: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job56: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job57: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job58: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job59: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job60: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job61: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job62: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job63: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job64: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job65: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job66: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job67: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job68: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job69: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job70: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job71: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job72: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job73: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job74: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job75: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job76: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job77: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 
job78: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job79: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job80: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job81: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job82: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job83: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job84: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job85: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job86: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job87: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job88: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job89: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job90: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job91: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job92: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job93: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job94: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job95: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job96: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job97: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job98: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 job99: (g=0): rw=randwrite, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=8 00:12:15.318 fio-3.35 00:12:15.318 Starting 100 threads 00:12:15.318 [2024-07-23 02:07:24.083972] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.318 [2024-07-23 02:07:24.088787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.318 [2024-07-23 02:07:24.093003] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.577 [2024-07-23 02:07:24.095387] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.577 [2024-07-23 02:07:24.097877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.577 [2024-07-23 02:07:24.100315] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.102787] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.105321] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.108000] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 
02:07:24.110613] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:15.578 [2024-07-23 02:07:24.341797] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported
INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.344397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.347025] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.349967] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.352299] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.578 [2024-07-23 02:07:24.354473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.837 [2024-07-23 02:07:24.356940] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.837 [2024-07-23 02:07:24.359615] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.837 [2024-07-23 02:07:24.361680] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:15.837 [2024-07-23 02:07:24.364038] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.027 [2024-07-23 02:07:28.376707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.027 [2024-07-23 02:07:28.422075] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.027 [2024-07-23 02:07:28.595683] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.027 [2024-07-23 02:07:28.658339] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.027 [2024-07-23 02:07:28.737110] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.286 [2024-07-23 02:07:28.859065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.286 [2024-07-23 02:07:29.026808] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.549 [2024-07-23 
02:07:29.234331] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.807 [2024-07-23 02:07:29.426278] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:20.807 [2024-07-23 02:07:29.520037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.094 [2024-07-23 02:07:29.656499] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.094 [2024-07-23 02:07:29.718983] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.095 [2024-07-23 02:07:29.822445] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.352 [2024-07-23 02:07:29.919474] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.352 [2024-07-23 02:07:29.992590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.352 [2024-07-23 02:07:30.067900] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.352 [2024-07-23 02:07:30.106312] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.616 [2024-07-23 02:07:30.160800] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.616 [2024-07-23 02:07:30.207635] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.616 [2024-07-23 02:07:30.270213] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.616 [2024-07-23 02:07:30.345028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.877 [2024-07-23 02:07:30.410151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.877 [2024-07-23 02:07:30.525085] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:21.877 [2024-07-23 02:07:30.615553] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.135 [2024-07-23 02:07:30.686011] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.135 [2024-07-23 02:07:30.761776] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.135 [2024-07-23 02:07:30.851632] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.135 [2024-07-23 02:07:30.908597] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.394 [2024-07-23 02:07:30.983030] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.394 [2024-07-23 02:07:31.081218] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.394 [2024-07-23 02:07:31.159065] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.652 [2024-07-23 02:07:31.232938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.652 [2024-07-23 02:07:31.316059] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.911 [2024-07-23 02:07:31.465182] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.911 [2024-07-23 02:07:31.531324] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:22.911 [2024-07-23 02:07:31.641539] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.169 [2024-07-23 02:07:31.725612] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.169 [2024-07-23 02:07:31.819194] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.169 [2024-07-23 02:07:31.908829] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.428 [2024-07-23 02:07:32.044473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:23.428 [2024-07-23 02:07:32.170686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.686 [2024-07-23 02:07:32.249721] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.686 [2024-07-23 02:07:32.303670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.686 [2024-07-23 02:07:32.351363] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.686 [2024-07-23 02:07:32.422258] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.945 [2024-07-23 02:07:32.531455] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.945 [2024-07-23 02:07:32.611196] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:23.945 [2024-07-23 02:07:32.706607] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:24.203 [2024-07-23 02:07:32.831782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:24.203 [2024-07-23 02:07:32.952599] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:24.462 [2024-07-23 02:07:33.200353] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:24.728 [2024-07-23 02:07:33.465384] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:24.995 [2024-07-23 02:07:33.571369] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:24.995 [2024-07-23 02:07:33.649087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.255 [2024-07-23 02:07:33.777033] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.255 [2024-07-23 02:07:33.852410] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.255 [2024-07-23 
02:07:33.953694] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.255 [2024-07-23 02:07:34.000572] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.514 [2024-07-23 02:07:34.046915] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.774 [2024-07-23 02:07:34.426706] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:25.774 [2024-07-23 02:07:34.513486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.055 [2024-07-23 02:07:34.637639] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.055 [2024-07-23 02:07:34.701346] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.321 [2024-07-23 02:07:34.827161] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.321 [2024-07-23 02:07:34.928566] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.321 [2024-07-23 02:07:35.053364] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.581 [2024-07-23 02:07:35.124440] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.581 [2024-07-23 02:07:35.323167] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.840 [2024-07-23 02:07:35.403482] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.840 [2024-07-23 02:07:35.498352] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:26.840 [2024-07-23 02:07:35.565831] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.099 [2024-07-23 02:07:35.649916] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.099 [2024-07-23 02:07:35.710378] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.099 [2024-07-23 02:07:35.761770] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.099 [2024-07-23 02:07:35.842665] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.359 [2024-07-23 02:07:35.919973] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.359 [2024-07-23 02:07:36.042014] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.618 [2024-07-23 02:07:36.155917] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.618 [2024-07-23 02:07:36.258838] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.618 [2024-07-23 02:07:36.332126] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.618 [2024-07-23 02:07:36.380982] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.877 [2024-07-23 02:07:36.447495] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:27.877 [2024-07-23 02:07:36.589329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.135 [2024-07-23 02:07:36.756522] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.135 [2024-07-23 02:07:36.797765] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.394 [2024-07-23 02:07:36.924993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.394 [2024-07-23 02:07:37.002703] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.394 [2024-07-23 02:07:37.135437] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.179868] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.325631] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.372640] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.393621] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.398271] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.400855] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.404934] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.406926] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.409013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.411020] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.413341] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.415512] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.417442] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.419376] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.421429] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.423841] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 02:07:37.427209] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.652 [2024-07-23 
02:07:37.429149] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:12:28.921
00:12:28.921 job0: (groupid=0, jobs=1): err= 0: pid=70299: Tue Jul 23 02:07:37 2024
00:12:28.921 read: IOPS=58, BW=7446KiB/s (7625kB/s)(60.0MiB/8251msec)
00:12:28.921 slat (usec): min=7, max=2650, avg=64.10, stdev=157.13
00:12:28.921 clat (usec): min=5684, max=64534, avg=19090.31, stdev=10773.90
00:12:28.921 lat (usec): min=5702, max=64592, avg=19154.40, stdev=10775.27
00:12:28.921 clat percentiles (usec):
00:12:28.921 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10421],
00:12:28.921 | 30.00th=[11731], 40.00th=[14484], 50.00th=[16319], 60.00th=[18220],
00:12:28.921 | 70.00th=[20317], 80.00th=[25822], 90.00th=[34341], 95.00th=[41157],
00:12:28.921 | 99.00th=[57934], 99.50th=[60556], 99.90th=[64750], 99.95th=[64750],
00:12:28.921 | 99.99th=[64750]
00:12:28.921 write: IOPS=64, BW=8265KiB/s (8463kB/s)(72.0MiB/8921msec); 0 zone resets
00:12:28.922 slat (usec): min=42, max=13672, avg=158.02, stdev=583.27
00:12:28.922 clat (msec): min=14, max=388, avg=123.14, stdev=48.54
00:12:28.922 lat (msec): min=14, max=388, avg=123.29, stdev=48.52
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 31], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88],
00:12:28.922 | 30.00th=[ 91], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 117],
00:12:28.922 | 70.00th=[ 138], 80.00th=[ 157], 90.00th=[ 194], 95.00th=[ 222],
00:12:28.922 | 99.00th=[ 264], 99.50th=[ 355], 99.90th=[ 388], 99.95th=[ 388],
00:12:28.922 | 99.99th=[ 388]
00:12:28.922 bw ( KiB/s): min= 2048, max=11264, per=0.92%, avg=7666.42, stdev=2886.89, samples=19
00:12:28.922 iops : min= 16, max= 88, avg=59.84, stdev=22.62, samples=19
00:12:28.922 lat (msec) : 10=7.20%, 20=24.62%, 50=13.26%, 100=25.19%, 250=28.98%
00:12:28.922 lat (msec) : 500=0.76%
00:12:28.922 cpu : usr=0.42%, sys=0.26%, ctx=1711, majf=0, minf=3
00:12:28.922 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 issued rwts: total=480,576,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.922 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.922 job1: (groupid=0, jobs=1): err= 0: pid=70300: Tue Jul 23 02:07:37 2024
00:12:28.922 read: IOPS=52, BW=6677KiB/s (6837kB/s)(47.5MiB/7285msec)
00:12:28.922 slat (usec): min=7, max=1111, avg=76.29, stdev=122.98
00:12:28.922 clat (msec): min=4, max=480, avg=35.11, stdev=57.90
00:12:28.922 lat (msec): min=4, max=480, avg=35.19, stdev=57.90
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 15],
00:12:28.922 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 23],
00:12:28.922 | 70.00th=[ 28], 80.00th=[ 34], 90.00th=[ 51], 95.00th=[ 92],
00:12:28.922 | 99.00th=[ 359], 99.50th=[ 481], 99.90th=[ 481], 99.95th=[ 481],
00:12:28.922 | 99.99th=[ 481]
00:12:28.922 write: IOPS=57, BW=7376KiB/s (7553kB/s)(60.0MiB/8330msec); 0 zone resets
00:12:28.922 slat (usec): min=44, max=1802, avg=145.81, stdev=184.40
00:12:28.922 clat (msec): min=67, max=510, avg=137.43, stdev=67.77
00:12:28.922 lat (msec): min=67, max=510, avg=137.58, stdev=67.78
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 79], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88],
00:12:28.922 | 30.00th=[ 91], 40.00th=[ 100], 50.00th=[ 111], 60.00th=[ 126],
00:12:28.922 | 70.00th=[ 150], 80.00th=[ 176], 90.00th=[ 234], 95.00th=[ 275],
00:12:28.922 | 99.00th=[ 401], 99.50th=[ 430], 99.90th=[ 510], 99.95th=[ 510],
00:12:28.922 | 99.99th=[ 510]
00:12:28.922 bw ( KiB/s): min= 512, max=11264, per=0.78%, avg=6515.56, stdev=3457.92, samples=18
00:12:28.922 iops : min= 4, max= 88, avg=50.83, stdev=27.00, samples=18
00:12:28.922 lat (msec) : 10=0.70%, 20=21.74%, 50=16.86%, 100=26.51%, 250=29.07%
00:12:28.922 lat (msec) : 500=5.00%, 750=0.12%
00:12:28.922 cpu : usr=0.36%, sys=0.20%, ctx=1486, majf=0, minf=7
00:12:28.922 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 issued rwts: total=380,480,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.922 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.922 job2: (groupid=0, jobs=1): err= 0: pid=70301: Tue Jul 23 02:07:37 2024
00:12:28.922 read: IOPS=63, BW=8140KiB/s (8335kB/s)(60.0MiB/7548msec)
00:12:28.922 slat (usec): min=7, max=1236, avg=76.29, stdev=132.90
00:12:28.922 clat (msec): min=11, max=147, avg=30.82, stdev=19.40
00:12:28.922 lat (msec): min=11, max=147, avg=30.90, stdev=19.41
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 13], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 17],
00:12:28.922 | 30.00th=[ 19], 40.00th=[ 23], 50.00th=[ 26], 60.00th=[ 29],
00:12:28.922 | 70.00th=[ 34], 80.00th=[ 42], 90.00th=[ 50], 95.00th=[ 74],
00:12:28.922 | 99.00th=[ 122], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148],
00:12:28.922 | 99.99th=[ 148]
00:12:28.922 write: IOPS=65, BW=8354KiB/s (8554kB/s)(67.0MiB/8213msec); 0 zone resets
00:12:28.922 slat (usec): min=38, max=14003, avg=168.29, stdev=626.36
00:12:28.922 clat (msec): min=58, max=492, avg=121.15, stdev=50.46
00:12:28.922 lat (msec): min=60, max=492, avg=121.32, stdev=50.45
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 67], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88],
00:12:28.922 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 103], 60.00th=[ 112],
00:12:28.922 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 180], 95.00th=[ 222],
00:12:28.922 | 99.00th=[ 296], 99.50th=[ 363], 99.90th=[ 493], 99.95th=[ 493],
00:12:28.922 | 99.99th=[ 493]
00:12:28.922 bw ( KiB/s): min= 1788, max=11264, per=0.90%, avg=7523.72, stdev=3018.95, samples=18
00:12:28.922 iops : min= 13, max= 88, avg=58.67, stdev=23.75, samples=18
00:12:28.922 lat (msec) : 20=16.04%, 50=26.57%, 100=28.74%, 250=27.07%, 500=1.57%
00:12:28.922 cpu : usr=0.50%, sys=0.19%, ctx=1623, majf=0, minf=1
00:12:28.922 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 issued rwts: total=480,536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.922 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.922 job3: (groupid=0, jobs=1): err= 0: pid=70311: Tue Jul 23 02:07:37 2024
00:12:28.922 read: IOPS=58, BW=7444KiB/s (7622kB/s)(60.0MiB/8254msec)
00:12:28.922 slat (usec): min=7, max=1052, avg=54.38, stdev=86.31
00:12:28.922 clat (usec): min=8079, max=60471, avg=19551.49, stdev=8428.24
00:12:28.922 lat (usec): min=8094, max=60496, avg=19605.88, stdev=8416.49
00:12:28.922 clat percentiles (usec):
00:12:28.922 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[10814], 20.00th=[12911],
00:12:28.922 | 30.00th=[14353], 40.00th=[15664], 50.00th=[17171], 60.00th=[19006],
00:12:28.922 | 70.00th=[21627], 80.00th=[25560], 90.00th=[31327], 95.00th=[35914],
00:12:28.922 | 99.00th=[48497], 99.50th=[50594], 99.90th=[60556], 99.95th=[60556],
00:12:28.922 | 99.99th=[60556]
00:12:28.922 write: IOPS=63, BW=8109KiB/s (8304kB/s)(70.6MiB/8918msec); 0 zone resets
00:12:28.922 slat (usec): min=43, max=2786, avg=155.54, stdev=221.69
00:12:28.922 clat (msec): min=3, max=529, avg=125.37, stdev=60.06
00:12:28.922 lat (msec): min=3, max=529, avg=125.52, stdev=60.05
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 8], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88],
00:12:28.922 | 30.00th=[ 90], 40.00th=[ 97], 50.00th=[ 107], 60.00th=[ 128],
00:12:28.922 | 70.00th=[ 138], 80.00th=[ 159], 90.00th=[ 197], 95.00th=[ 220],
00:12:28.922 | 99.00th=[ 401], 99.50th=[ 493], 99.90th=[ 531], 99.95th=[ 531],
00:12:28.922 | 99.99th=[ 531]
00:12:28.922 bw ( KiB/s): min= 2560, max=15104, per=0.95%, avg=7919.22, stdev=3122.47, samples=18
00:12:28.922 iops : min= 20, max= 118, avg=61.72, stdev=24.42, samples=18
00:12:28.922 lat (msec) : 4=0.10%, 10=2.11%, 20=29.19%, 50=16.27%, 100=22.20%
00:12:28.922 lat (msec) : 250=28.90%, 500=1.15%, 750=0.10%
00:12:28.922 cpu : usr=0.45%, sys=0.19%, ctx=1717, majf=0, minf=1
00:12:28.922 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.922 issued rwts: total=480,565,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.922 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.922 job4: (groupid=0, jobs=1): err= 0: pid=70318: Tue Jul 23 02:07:37 2024
00:12:28.922 read: IOPS=59, BW=7654KiB/s (7838kB/s)(60.0MiB/8027msec)
00:12:28.922 slat (usec): min=7, max=1024, avg=62.57, stdev=114.66
00:12:28.922 clat (msec): min=12, max=105, avg=29.27, stdev=12.40
00:12:28.922 lat (msec): min=12, max=105, avg=29.33, stdev=12.40
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 16], 20.00th=[ 19],
00:12:28.922 | 30.00th=[ 23], 40.00th=[ 27], 50.00th=[ 29], 60.00th=[ 30],
00:12:28.922 | 70.00th=[ 32], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 53],
00:12:28.922 | 99.00th=[ 80], 99.50th=[ 81], 99.90th=[ 106], 99.95th=[ 106],
00:12:28.922 | 99.99th=[ 106]
00:12:28.922 write: IOPS=64, BW=8251KiB/s (8449kB/s)(66.9MiB/8300msec); 0 zone resets
00:12:28.922 slat (usec): min=38, max=4015, avg=159.06, stdev=254.39
00:12:28.922 clat (msec): min=44, max=470, avg=122.72, stdev=58.00
00:12:28.922 lat (msec): min=44, max=470, avg=122.87, stdev=58.00
00:12:28.922 clat percentiles (msec):
00:12:28.922 | 1.00th=[ 51], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89],
00:12:28.922 | 30.00th=[ 93], 40.00th=[ 100], 50.00th=[ 104], 60.00th=[ 110],
00:12:28.922 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 180], 95.00th=[ 266],
00:12:28.922 | 99.00th=[ 359], 99.50th=[ 397], 99.90th=[ 472], 99.95th=[ 472],
00:12:28.922 | 99.99th=[ 472]
00:12:28.922 bw ( KiB/s): min= 256, max=11497, per=0.85%, avg=7107.84, stdev=3568.33, samples=19
00:12:28.922 iops : min= 2, max= 89, avg=55.32, stdev=27.94, samples=19
00:12:28.922 lat (msec) : 20=11.03%, 50=34.09%, 100=24.83%, 250=27.29%, 500=2.76%
00:12:28.922 cpu : usr=0.45%, sys=0.18%, ctx=1793, majf=0, minf=3
00:12:28.922 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 issued rwts: total=480,535,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.923 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.923 job5: (groupid=0, jobs=1): err= 0: pid=70321: Tue Jul 23 02:07:37 2024
00:12:28.923 read: IOPS=59, BW=7632KiB/s (7815kB/s)(60.0MiB/8050msec)
00:12:28.923 slat (usec): min=7, max=1449, avg=70.83, stdev=123.02
00:12:28.923 clat (usec): min=13229, max=85726, avg=27602.64, stdev=10987.56
00:12:28.923 lat (usec): min=13260, max=85884, avg=27673.47, stdev=10991.48
00:12:28.923 clat percentiles (usec):
00:12:28.923 | 1.00th=[13698], 5.00th=[14877], 10.00th=[15926], 20.00th=[18744],
00:12:28.923 | 30.00th=[20841], 40.00th=[23462], 50.00th=[26084], 60.00th=[27919],
00:12:28.923 | 70.00th=[29754], 80.00th=[33817], 90.00th=[40633], 95.00th=[50594],
00:12:28.923 | 99.00th=[61604], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459],
00:12:28.923 | 99.99th=[85459]
00:12:28.923 write: IOPS=61, BW=7822KiB/s (8010kB/s)(64.1MiB/8395msec); 0 zone resets
00:12:28.923 slat (usec): min=44, max=1579, avg=156.51, stdev=186.02
00:12:28.923 clat (msec): min=6, max=528, avg=129.46, stdev=66.91
00:12:28.923 lat (msec): min=6, max=528, avg=129.61, stdev=66.91
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 9], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 91],
00:12:28.923 | 30.00th=[ 96], 40.00th=[ 105], 50.00th=[ 114], 60.00th=[ 123],
00:12:28.923 | 70.00th=[ 133], 80.00th=[ 148], 90.00th=[ 184], 95.00th=[ 271],
00:12:28.923 | 99.00th=[ 414], 99.50th=[ 498], 99.90th=[ 527], 99.95th=[ 527],
00:12:28.923 | 99.99th=[ 527]
00:12:28.923 bw ( KiB/s): min= 512, max=13082, per=0.81%, avg=6818.00, stdev=3491.29, samples=19
00:12:28.923 iops : min= 4, max= 102, avg=53.16, stdev=27.32, samples=19
00:12:28.923 lat (msec) : 10=0.70%, 20=12.69%, 50=34.04%, 100=18.13%, 250=31.72%
00:12:28.923 lat (msec) : 500=2.52%, 750=0.20%
00:12:28.923 cpu : usr=0.48%, sys=0.17%, ctx=1745, majf=0, minf=3
00:12:28.923 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 issued rwts: total=480,513,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.923 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.923 job6: (groupid=0, jobs=1): err= 0: pid=70326: Tue Jul 23 02:07:37 2024
00:12:28.923 read: IOPS=57, BW=7375KiB/s (7552kB/s)(60.0MiB/8331msec)
00:12:28.923 slat (usec): min=7, max=1446, avg=68.56, stdev=131.94
00:12:28.923 clat (msec): min=5, max=215, avg=28.25, stdev=25.31
00:12:28.923 lat (msec): min=5, max=215, avg=28.32, stdev=25.32
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 16], 20.00th=[ 17],
00:12:28.923 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 27],
00:12:28.923 | 70.00th=[ 30], 80.00th=[ 33], 90.00th=[ 43], 95.00th=[ 51],
00:12:28.923 | 99.00th=[ 205], 99.50th=[ 207], 99.90th=[ 215], 99.95th=[ 215],
00:12:28.923 | 99.99th=[ 215]
00:12:28.923 write: IOPS=65, BW=8382KiB/s (8583kB/s)(68.6MiB/8384msec); 0 zone resets
00:12:28.923 slat (usec): min=40, max=3127, avg=146.47, stdev=210.52
00:12:28.923 clat (usec): min=1357, max=442256, avg=121471.02, stdev=56940.58
00:12:28.923 lat (usec): min=1463, max=442340, avg=121617.49, stdev=56935.17
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 8], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88],
00:12:28.923 | 30.00th=[ 92], 40.00th=[ 96], 50.00th=[ 106], 60.00th=[ 114],
00:12:28.923 | 70.00th=[ 130], 80.00th=[ 148], 90.00th=[ 188], 95.00th=[ 230],
00:12:28.923 | 99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 443], 99.95th=[ 443],
00:12:28.923 | 99.99th=[ 443]
00:12:28.923 bw ( KiB/s): min= 512, max=14848, per=0.92%, avg=7691.56, stdev=3512.40, samples=18
00:12:28.923 iops : min= 4, max= 116, avg=59.94, stdev=27.43, samples=18
00:12:28.923 lat (msec) : 2=0.10%, 10=1.46%, 20=17.10%, 50=27.31%, 100=24.30%
00:12:28.923 lat (msec) : 250=27.79%, 500=1.94%
00:12:28.923 cpu : usr=0.48%, sys=0.21%, ctx=1667, majf=0, minf=3
00:12:28.923 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 issued rwts: total=480,549,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.923 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.923 job7: (groupid=0, jobs=1): err= 0: pid=70353: Tue Jul 23 02:07:37 2024
00:12:28.923 read: IOPS=46, BW=5904KiB/s (6045kB/s)(40.0MiB/6938msec)
00:12:28.923 slat (usec): min=7, max=956, avg=76.93, stdev=148.01
00:12:28.923 clat (msec): min=6, max=149, avg=19.83, stdev=21.82
00:12:28.923 lat (msec): min=6, max=149, avg=19.91, stdev=21.80
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11],
00:12:28.923 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 16],
00:12:28.923 | 70.00th=[ 18], 80.00th=[ 22], 90.00th=[ 27], 95.00th=[ 37],
00:12:28.923 | 99.00th=[ 140], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150],
00:12:28.923 | 99.99th=[ 150]
00:12:28.923 write: IOPS=51, BW=6550KiB/s (6707kB/s)(59.2MiB/9263msec); 0 zone resets
00:12:28.923 slat (usec): min=43, max=3640, avg=159.27, stdev=273.85
00:12:28.923 clat (msec): min=51, max=444, avg=155.41, stdev=58.68
00:12:28.923 lat (msec): min=51, max=444, avg=155.57, stdev=58.67
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 57], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 103],
00:12:28.923 | 30.00th=[ 122], 40.00th=[ 134], 50.00th=[ 150], 60.00th=[ 165],
00:12:28.923 | 70.00th=[ 174], 80.00th=[ 199], 90.00th=[ 234], 95.00th=[ 262],
00:12:28.923 | 99.00th=[ 351], 99.50th=[ 397], 99.90th=[ 447], 99.95th=[ 447],
00:12:28.923 | 99.99th=[ 447]
00:12:28.923 bw ( KiB/s): min= 1792, max= 9728, per=0.71%, avg=5972.35, stdev=2385.03, samples=20
00:12:28.923 iops : min= 14, max= 76, avg=46.50, stdev=18.61, samples=20
00:12:28.923 lat (msec) : 10=6.30%, 20=24.06%, 50=8.06%, 100=12.59%, 250=45.47%
00:12:28.923 lat (msec) : 500=3.53%
00:12:28.923 cpu : usr=0.34%, sys=0.19%, ctx=1279, majf=0, minf=9
00:12:28.923 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:28.923 issued rwts: total=320,474,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:28.923 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:28.923 job8: (groupid=0, jobs=1): err= 0: pid=70388: Tue Jul 23 02:07:37 2024
00:12:28.923 read: IOPS=47, BW=6041KiB/s (6186kB/s)(40.0MiB/6780msec)
00:12:28.923 slat (usec): min=6, max=1183, avg=63.25, stdev=126.63
00:12:28.923 clat (msec): min=6, max=336, avg=27.32, stdev=47.86
00:12:28.923 lat (msec): min=6, max=336, avg=27.38, stdev=47.86
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12],
00:12:28.923 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 20],
00:12:28.923 | 70.00th=[ 23], 80.00th=[ 29], 90.00th=[ 36], 95.00th=[ 47],
00:12:28.923 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 338],
00:12:28.923 | 99.99th=[ 338]
00:12:28.923 write: IOPS=47, BW=6071KiB/s (6216kB/s)(53.1MiB/8961msec); 0 zone resets
00:12:28.923 slat (usec): min=31, max=2697, avg=174.42, stdev=249.74
00:12:28.923 clat (msec): min=71, max=452, avg=167.93, stdev=68.34
00:12:28.923 lat (msec): min=72, max=452, avg=168.10, stdev=68.32
00:12:28.923 clat percentiles (msec):
00:12:28.923 | 1.00th=[ 84], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 105],
00:12:28.923 | 30.00th=[ 121], 40.00th=[ 138], 50.00th=[ 159], 60.00th=[ 171],
00:12:28.923 | 70.00th=[ 192], 80.00th=[ 224], 90.00th=[ 253], 95.00th=[ 284],
00:12:28.923 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 451], 99.95th=[ 451],
00:12:28.923 | 99.99th=[ 451]
00:12:28.923 bw ( KiB/s): min= 1024, max= 9472, per=0.67%, avg=5626.32, stdev=2193.28, samples=19
00:12:28.923 iops : min= 8, max= 74, avg=43.84, stdev=17.16, samples=19
00:12:28.923 lat (msec) : 10=3.49%, 20=22.82%, 50=14.50%, 100=9.80%, 250=42.42%
00:12:28.923 lat (msec) : 500=6.98%
00:12:28.923 cpu : usr=0.29%, sys=0.19%, ctx=1307, majf=0, minf=7
00:12:28.923 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:28.923 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.923 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.923 issued rwts: total=320,425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.923 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.923 job9: (groupid=0, jobs=1): err= 0: pid=70391: Tue Jul 23 02:07:37 2024 00:12:28.923 read: IOPS=61, BW=7888KiB/s (8077kB/s)(60.0MiB/7789msec) 00:12:28.923 slat (usec): min=6, max=1422, avg=75.24, stdev=120.38 00:12:28.923 clat (msec): min=13, max=101, avg=29.32, stdev=13.89 00:12:28.923 lat (msec): min=13, max=101, avg=29.40, stdev=13.88 00:12:28.923 clat percentiles (msec): 00:12:28.923 | 1.00th=[ 14], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 20], 00:12:28.923 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 26], 60.00th=[ 28], 00:12:28.923 | 70.00th=[ 31], 80.00th=[ 35], 90.00th=[ 45], 95.00th=[ 56], 00:12:28.923 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 102], 99.95th=[ 102], 00:12:28.923 | 99.99th=[ 102] 00:12:28.924 write: IOPS=61, BW=7887KiB/s (8076kB/s)(63.8MiB/8277msec); 0 zone resets 00:12:28.924 slat (usec): min=44, max=1820, avg=142.33, stdev=178.25 00:12:28.924 clat (msec): min=82, max=542, avg=128.31, stdev=66.94 00:12:28.924 lat (msec): min=82, max=542, avg=128.46, stdev=66.94 00:12:28.924 clat percentiles (msec): 00:12:28.924 | 1.00th=[ 84], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.924 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 104], 60.00th=[ 114], 00:12:28.924 | 70.00th=[ 131], 80.00th=[ 146], 90.00th=[ 190], 95.00th=[ 284], 00:12:28.924 | 99.00th=[ 443], 99.50th=[ 456], 99.90th=[ 542], 99.95th=[ 542], 00:12:28.924 | 99.99th=[ 542] 00:12:28.924 bw ( KiB/s): min= 763, max=11912, per=0.86%, avg=7207.65, stdev=3495.57, samples=17 00:12:28.924 iops : min= 5, max= 93, avg=55.76, stdev=27.53, samples=17 00:12:28.924 lat (msec) : 20=10.81%, 50=34.55%, 100=25.86%, 250=25.56%, 500=3.13% 00:12:28.924 lat (msec) : 750=0.10% 00:12:28.924 cpu : usr=0.37%, sys=0.25%, 
ctx=1710, majf=0, minf=5 00:12:28.924 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 issued rwts: total=480,510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.924 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.924 job10: (groupid=0, jobs=1): err= 0: pid=70507: Tue Jul 23 02:07:37 2024 00:12:28.924 read: IOPS=73, BW=9357KiB/s (9582kB/s)(80.0MiB/8755msec) 00:12:28.924 slat (usec): min=7, max=1041, avg=65.38, stdev=118.09 00:12:28.924 clat (usec): min=5941, max=76124, avg=14297.19, stdev=8550.05 00:12:28.924 lat (usec): min=5972, max=76173, avg=14362.57, stdev=8551.90 00:12:28.924 clat percentiles (usec): 00:12:28.924 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8979], 00:12:28.924 | 30.00th=[10290], 40.00th=[11338], 50.00th=[12387], 60.00th=[13566], 00:12:28.924 | 70.00th=[15270], 80.00th=[16712], 90.00th=[20579], 95.00th=[26084], 00:12:28.924 | 99.00th=[65274], 99.50th=[72877], 99.90th=[76022], 99.95th=[76022], 00:12:28.924 | 99.99th=[76022] 00:12:28.924 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(100MiB/8885msec); 0 zone resets 00:12:28.924 slat (usec): min=36, max=32469, avg=218.37, stdev=1458.42 00:12:28.924 clat (msec): min=3, max=318, avg=87.98, stdev=41.77 00:12:28.924 lat (msec): min=3, max=318, avg=88.20, stdev=41.84 00:12:28.924 clat percentiles (msec): 00:12:28.924 | 1.00th=[ 6], 5.00th=[ 57], 10.00th=[ 60], 20.00th=[ 63], 00:12:28.924 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 82], 00:12:28.924 | 70.00th=[ 92], 80.00th=[ 106], 90.00th=[ 134], 95.00th=[ 169], 00:12:28.924 | 99.00th=[ 259], 99.50th=[ 284], 99.90th=[ 317], 99.95th=[ 317], 00:12:28.924 | 99.99th=[ 317] 00:12:28.924 bw ( KiB/s): min= 2816, max=20736, per=1.21%, avg=10146.50, stdev=4619.89, samples=20 00:12:28.924 iops : min= 22, max= 162, 
avg=79.10, stdev=36.13, samples=20 00:12:28.924 lat (msec) : 4=0.35%, 10=13.40%, 20=26.60%, 50=5.62%, 100=40.35% 00:12:28.924 lat (msec) : 250=12.85%, 500=0.83% 00:12:28.924 cpu : usr=0.76%, sys=0.22%, ctx=2390, majf=0, minf=3 00:12:28.924 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.924 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.924 job11: (groupid=0, jobs=1): err= 0: pid=70643: Tue Jul 23 02:07:37 2024 00:12:28.924 read: IOPS=83, BW=10.4MiB/s (10.9MB/s)(80.0MiB/7698msec) 00:12:28.924 slat (usec): min=7, max=3977, avg=56.46, stdev=177.06 00:12:28.924 clat (msec): min=3, max=146, avg=18.16, stdev=23.76 00:12:28.924 lat (msec): min=3, max=146, avg=18.22, stdev=23.76 00:12:28.924 clat percentiles (msec): 00:12:28.924 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:12:28.924 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:12:28.924 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 35], 95.00th=[ 57], 00:12:28.924 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:12:28.924 | 99.99th=[ 146] 00:12:28.924 write: IOPS=75, BW=9708KiB/s (9941kB/s)(81.4MiB/8583msec); 0 zone resets 00:12:28.924 slat (usec): min=33, max=7221, avg=161.77, stdev=351.31 00:12:28.924 clat (msec): min=29, max=260, avg=104.73, stdev=41.12 00:12:28.924 lat (msec): min=30, max=260, avg=104.89, stdev=41.10 00:12:28.924 clat percentiles (msec): 00:12:28.924 | 1.00th=[ 36], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 71], 00:12:28.924 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 97], 60.00th=[ 107], 00:12:28.924 | 70.00th=[ 115], 80.00th=[ 130], 90.00th=[ 161], 95.00th=[ 199], 00:12:28.924 | 99.00th=[ 230], 99.50th=[ 241], 99.90th=[ 262], 99.95th=[ 262], 00:12:28.924 | 
99.99th=[ 262] 00:12:28.924 bw ( KiB/s): min= 1788, max=14336, per=0.98%, avg=8240.15, stdev=3887.34, samples=20 00:12:28.924 iops : min= 13, max= 112, avg=64.25, stdev=30.39, samples=20 00:12:28.924 lat (msec) : 4=0.08%, 10=19.29%, 20=21.92%, 50=5.50%, 100=28.43% 00:12:28.924 lat (msec) : 250=24.63%, 500=0.15% 00:12:28.924 cpu : usr=0.61%, sys=0.29%, ctx=2072, majf=0, minf=1 00:12:28.924 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 issued rwts: total=640,651,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.924 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.924 job12: (groupid=0, jobs=1): err= 0: pid=70657: Tue Jul 23 02:07:37 2024 00:12:28.924 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(89.8MiB/8936msec) 00:12:28.924 slat (usec): min=7, max=810, avg=55.50, stdev=94.38 00:12:28.924 clat (usec): min=3828, max=58972, avg=13161.25, stdev=8583.63 00:12:28.924 lat (usec): min=3953, max=59101, avg=13216.75, stdev=8594.70 00:12:28.924 clat percentiles (usec): 00:12:28.924 | 1.00th=[ 5145], 5.00th=[ 5669], 10.00th=[ 6128], 20.00th=[ 7308], 00:12:28.924 | 30.00th=[ 8717], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[12649], 00:12:28.924 | 70.00th=[13829], 80.00th=[16188], 90.00th=[21627], 95.00th=[27919], 00:12:28.924 | 99.00th=[54789], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:12:28.924 | 99.99th=[58983] 00:12:28.924 write: IOPS=90, BW=11.3MiB/s (11.9MB/s)(100MiB/8813msec); 0 zone resets 00:12:28.924 slat (usec): min=35, max=2941, avg=160.01, stdev=206.03 00:12:28.924 clat (usec): min=1711, max=269121, avg=87504.44, stdev=42960.25 00:12:28.924 lat (usec): min=1777, max=269327, avg=87664.45, stdev=42986.75 00:12:28.924 clat percentiles (msec): 00:12:28.924 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 60], 20.00th=[ 64], 00:12:28.924 | 30.00th=[ 68], 
40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 83], 00:12:28.924 | 70.00th=[ 93], 80.00th=[ 106], 90.00th=[ 138], 95.00th=[ 178], 00:12:28.924 | 99.00th=[ 247], 99.50th=[ 255], 99.90th=[ 271], 99.95th=[ 271], 00:12:28.924 | 99.99th=[ 271] 00:12:28.924 bw ( KiB/s): min= 3584, max=24320, per=1.22%, avg=10237.60, stdev=5093.27, samples=20 00:12:28.924 iops : min= 28, max= 190, avg=79.85, stdev=39.87, samples=20 00:12:28.924 lat (msec) : 2=0.13%, 4=0.46%, 10=21.48%, 20=21.74%, 50=5.40% 00:12:28.924 lat (msec) : 100=38.08%, 250=12.25%, 500=0.46% 00:12:28.924 cpu : usr=0.79%, sys=0.25%, ctx=2487, majf=0, minf=1 00:12:28.924 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 issued rwts: total=718,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.924 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.924 job13: (groupid=0, jobs=1): err= 0: pid=70794: Tue Jul 23 02:07:37 2024 00:12:28.924 read: IOPS=75, BW=9709KiB/s (9942kB/s)(81.0MiB/8543msec) 00:12:28.924 slat (usec): min=7, max=1365, avg=69.12, stdev=141.61 00:12:28.924 clat (usec): min=6151, max=34673, avg=13762.75, stdev=4640.77 00:12:28.924 lat (usec): min=6498, max=34700, avg=13831.87, stdev=4643.09 00:12:28.924 clat percentiles (usec): 00:12:28.924 | 1.00th=[ 7177], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9896], 00:12:28.924 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12911], 60.00th=[13960], 00:12:28.924 | 70.00th=[15401], 80.00th=[16909], 90.00th=[19530], 95.00th=[22676], 00:12:28.924 | 99.00th=[30802], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:12:28.924 | 99.99th=[34866] 00:12:28.924 write: IOPS=90, BW=11.3MiB/s (11.8MB/s)(100MiB/8876msec); 0 zone resets 00:12:28.924 slat (usec): min=34, max=2528, avg=157.03, stdev=223.51 00:12:28.924 clat (msec): min=6, max=282, avg=87.94, stdev=37.93 
00:12:28.924 lat (msec): min=6, max=282, avg=88.10, stdev=37.93 00:12:28.924 clat percentiles (msec): 00:12:28.924 | 1.00th=[ 15], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 65], 00:12:28.924 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:12:28.924 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 134], 95.00th=[ 174], 00:12:28.924 | 99.00th=[ 251], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:12:28.924 | 99.99th=[ 284] 00:12:28.924 bw ( KiB/s): min= 1792, max=16384, per=1.22%, avg=10209.32, stdev=4397.91, samples=19 00:12:28.924 iops : min= 14, max= 128, avg=79.63, stdev=34.33, samples=19 00:12:28.924 lat (msec) : 10=9.53%, 20=32.04%, 50=4.28%, 100=43.65%, 250=9.88% 00:12:28.924 lat (msec) : 500=0.62% 00:12:28.924 cpu : usr=0.59%, sys=0.35%, ctx=2382, majf=0, minf=3 00:12:28.924 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.924 issued rwts: total=648,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.924 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.924 job14: (groupid=0, jobs=1): err= 0: pid=70953: Tue Jul 23 02:07:37 2024 00:12:28.924 read: IOPS=79, BW=9.99MiB/s (10.5MB/s)(80.0MiB/8005msec) 00:12:28.924 slat (usec): min=7, max=1753, avg=58.50, stdev=124.55 00:12:28.925 clat (msec): min=3, max=200, avg=20.94, stdev=25.49 00:12:28.925 lat (msec): min=3, max=200, avg=21.00, stdev=25.49 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:12:28.925 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:12:28.925 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 29], 95.00th=[ 50], 00:12:28.925 | 99.00th=[ 176], 99.50th=[ 190], 99.90th=[ 201], 99.95th=[ 201], 00:12:28.925 | 99.99th=[ 201] 00:12:28.925 write: IOPS=80, BW=10.0MiB/s (10.5MB/s)(83.5MiB/8341msec); 0 zone resets 
00:12:28.925 slat (usec): min=33, max=2874, avg=156.28, stdev=213.51 00:12:28.925 clat (msec): min=37, max=440, avg=98.98, stdev=44.93 00:12:28.925 lat (msec): min=37, max=440, avg=99.14, stdev=44.92 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 68], 00:12:28.925 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 87], 60.00th=[ 96], 00:12:28.925 | 70.00th=[ 111], 80.00th=[ 125], 90.00th=[ 146], 95.00th=[ 165], 00:12:28.925 | 99.00th=[ 275], 99.50th=[ 372], 99.90th=[ 443], 99.95th=[ 443], 00:12:28.925 | 99.99th=[ 443] 00:12:28.925 bw ( KiB/s): min= 1770, max=15461, per=1.08%, avg=9023.61, stdev=3829.91, samples=18 00:12:28.925 iops : min= 13, max= 120, avg=70.00, stdev=30.02, samples=18 00:12:28.925 lat (msec) : 4=0.08%, 10=10.09%, 20=25.15%, 50=11.47%, 100=32.87% 00:12:28.925 lat (msec) : 250=19.65%, 500=0.69% 00:12:28.925 cpu : usr=0.59%, sys=0.27%, ctx=2243, majf=0, minf=3 00:12:28.925 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 issued rwts: total=640,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.925 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.925 job15: (groupid=0, jobs=1): err= 0: pid=70955: Tue Jul 23 02:07:37 2024 00:12:28.925 read: IOPS=73, BW=9390KiB/s (9616kB/s)(80.0MiB/8724msec) 00:12:28.925 slat (usec): min=7, max=1080, avg=61.22, stdev=108.52 00:12:28.925 clat (usec): min=6242, max=69515, avg=15124.02, stdev=8598.99 00:12:28.925 lat (usec): min=6269, max=69527, avg=15185.24, stdev=8588.58 00:12:28.925 clat percentiles (usec): 00:12:28.925 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[10683], 00:12:28.925 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12780], 60.00th=[13960], 00:12:28.925 | 70.00th=[15270], 80.00th=[17957], 90.00th=[21890], 95.00th=[27919], 
00:12:28.925 | 99.00th=[59507], 99.50th=[65274], 99.90th=[69731], 99.95th=[69731], 00:12:28.925 | 99.99th=[69731] 00:12:28.925 write: IOPS=88, BW=11.0MiB/s (11.6MB/s)(97.5MiB/8828msec); 0 zone resets 00:12:28.925 slat (usec): min=38, max=7780, avg=160.86, stdev=344.12 00:12:28.925 clat (msec): min=15, max=443, avg=89.75, stdev=47.33 00:12:28.925 lat (msec): min=16, max=443, avg=89.91, stdev=47.31 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 62], 00:12:28.925 | 30.00th=[ 65], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 82], 00:12:28.925 | 70.00th=[ 91], 80.00th=[ 106], 90.00th=[ 142], 95.00th=[ 167], 00:12:28.925 | 99.00th=[ 296], 99.50th=[ 355], 99.90th=[ 443], 99.95th=[ 443], 00:12:28.925 | 99.99th=[ 443] 00:12:28.925 bw ( KiB/s): min= 2043, max=16384, per=1.18%, avg=9875.90, stdev=4666.45, samples=20 00:12:28.925 iops : min= 15, max= 128, avg=77.00, stdev=36.49, samples=20 00:12:28.925 lat (msec) : 10=7.46%, 20=32.32%, 50=5.00%, 100=42.68%, 250=11.48% 00:12:28.925 lat (msec) : 500=1.06% 00:12:28.925 cpu : usr=0.64%, sys=0.35%, ctx=2392, majf=0, minf=1 00:12:28.925 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 issued rwts: total=640,780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.925 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.925 job16: (groupid=0, jobs=1): err= 0: pid=70956: Tue Jul 23 02:07:37 2024 00:12:28.925 read: IOPS=76, BW=9799KiB/s (10.0MB/s)(80.0MiB/8360msec) 00:12:28.925 slat (usec): min=6, max=915, avg=62.81, stdev=110.26 00:12:28.925 clat (msec): min=4, max=283, avg=26.27, stdev=35.56 00:12:28.925 lat (msec): min=4, max=283, avg=26.34, stdev=35.55 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:12:28.925 
| 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:12:28.925 | 70.00th=[ 21], 80.00th=[ 29], 90.00th=[ 51], 95.00th=[ 88], 00:12:28.925 | 99.00th=[ 201], 99.50th=[ 268], 99.90th=[ 284], 99.95th=[ 284], 00:12:28.925 | 99.99th=[ 284] 00:12:28.925 write: IOPS=88, BW=11.0MiB/s (11.5MB/s)(87.1MiB/7910msec); 0 zone resets 00:12:28.925 slat (usec): min=44, max=13240, avg=169.70, stdev=526.70 00:12:28.925 clat (msec): min=19, max=231, avg=89.86, stdev=32.77 00:12:28.925 lat (msec): min=19, max=231, avg=90.03, stdev=32.76 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 64], 00:12:28.925 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 90], 00:12:28.925 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 131], 95.00th=[ 161], 00:12:28.925 | 99.00th=[ 207], 99.50th=[ 220], 99.90th=[ 232], 99.95th=[ 232], 00:12:28.925 | 99.99th=[ 232] 00:12:28.925 bw ( KiB/s): min= 1280, max=15073, per=1.06%, avg=8829.30, stdev=4719.51, samples=20 00:12:28.925 iops : min= 10, max= 117, avg=68.85, stdev=36.86, samples=20 00:12:28.925 lat (msec) : 10=9.05%, 20=23.71%, 50=10.32%, 100=39.87%, 250=16.60% 00:12:28.925 lat (msec) : 500=0.45% 00:12:28.925 cpu : usr=0.63%, sys=0.27%, ctx=2249, majf=0, minf=3 00:12:28.925 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 issued rwts: total=640,697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.925 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.925 job17: (groupid=0, jobs=1): err= 0: pid=70957: Tue Jul 23 02:07:37 2024 00:12:28.925 read: IOPS=75, BW=9654KiB/s (9885kB/s)(80.0MiB/8486msec) 00:12:28.925 slat (usec): min=7, max=2498, avg=68.38, stdev=180.90 00:12:28.925 clat (usec): min=5017, max=92494, avg=19849.72, stdev=15222.46 00:12:28.925 lat (usec): min=5183, 
max=92691, avg=19918.09, stdev=15215.65 00:12:28.925 clat percentiles (usec): 00:12:28.925 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 7504], 20.00th=[ 8717], 00:12:28.925 | 30.00th=[ 9896], 40.00th=[12518], 50.00th=[15401], 60.00th=[17957], 00:12:28.925 | 70.00th=[20841], 80.00th=[26346], 90.00th=[37487], 95.00th=[54264], 00:12:28.925 | 99.00th=[74974], 99.50th=[76022], 99.90th=[92799], 99.95th=[92799], 00:12:28.925 | 99.99th=[92799] 00:12:28.925 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(97.8MiB/8435msec); 0 zone resets 00:12:28.925 slat (usec): min=38, max=2308, avg=151.45, stdev=170.74 00:12:28.925 clat (msec): min=46, max=313, avg=85.55, stdev=35.81 00:12:28.925 lat (msec): min=46, max=313, avg=85.70, stdev=35.81 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63], 00:12:28.925 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:12:28.925 | 70.00th=[ 87], 80.00th=[ 99], 90.00th=[ 126], 95.00th=[ 153], 00:12:28.925 | 99.00th=[ 271], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:12:28.925 | 99.99th=[ 313] 00:12:28.925 bw ( KiB/s): min= 1536, max=15360, per=1.18%, avg=9896.00, stdev=4766.67, samples=20 00:12:28.925 iops : min= 12, max= 120, avg=77.10, stdev=37.27, samples=20 00:12:28.925 lat (msec) : 10=13.64%, 20=16.24%, 50=12.24%, 100=47.54%, 250=9.77% 00:12:28.925 lat (msec) : 500=0.56% 00:12:28.925 cpu : usr=0.65%, sys=0.31%, ctx=2450, majf=0, minf=1 00:12:28.925 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.925 issued rwts: total=640,782,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.925 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.925 job18: (groupid=0, jobs=1): err= 0: pid=70958: Tue Jul 23 02:07:37 2024 00:12:28.925 read: IOPS=76, BW=9790KiB/s 
(10.0MB/s)(80.0MiB/8368msec) 00:12:28.925 slat (usec): min=6, max=2447, avg=76.73, stdev=173.82 00:12:28.925 clat (msec): min=4, max=151, avg=18.18, stdev=17.76 00:12:28.925 lat (msec): min=4, max=151, avg=18.25, stdev=17.76 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:12:28.925 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 16], 00:12:28.925 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 33], 95.00th=[ 41], 00:12:28.925 | 99.00th=[ 148], 99.50th=[ 150], 99.90th=[ 153], 99.95th=[ 153], 00:12:28.925 | 99.99th=[ 153] 00:12:28.925 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(87.4MiB/8567msec); 0 zone resets 00:12:28.925 slat (usec): min=44, max=3261, avg=176.27, stdev=263.68 00:12:28.925 clat (msec): min=42, max=342, avg=97.15, stdev=39.49 00:12:28.925 lat (msec): min=42, max=342, avg=97.33, stdev=39.49 00:12:28.925 clat percentiles (msec): 00:12:28.925 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 67], 00:12:28.925 | 30.00th=[ 71], 40.00th=[ 79], 50.00th=[ 88], 60.00th=[ 95], 00:12:28.925 | 70.00th=[ 107], 80.00th=[ 122], 90.00th=[ 144], 95.00th=[ 171], 00:12:28.925 | 99.00th=[ 264], 99.50th=[ 279], 99.90th=[ 342], 99.95th=[ 342], 00:12:28.925 | 99.99th=[ 342] 00:12:28.925 bw ( KiB/s): min= 2048, max=14592, per=1.06%, avg=8850.90, stdev=4063.40, samples=20 00:12:28.925 iops : min= 16, max= 114, avg=69.05, stdev=31.67, samples=20 00:12:28.925 lat (msec) : 10=13.14%, 20=22.85%, 50=10.53%, 100=34.95%, 250=17.92% 00:12:28.925 lat (msec) : 500=0.60% 00:12:28.925 cpu : usr=0.65%, sys=0.25%, ctx=2246, majf=0, minf=5 00:12:28.925 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.925 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 issued rwts: total=640,699,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.926 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:12:28.926 job19: (groupid=0, jobs=1): err= 0: pid=70961: Tue Jul 23 02:07:37 2024 00:12:28.926 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8731msec) 00:12:28.926 slat (usec): min=5, max=1063, avg=53.18, stdev=94.01 00:12:28.926 clat (usec): min=5376, max=45296, avg=13091.03, stdev=5200.71 00:12:28.926 lat (usec): min=5570, max=45308, avg=13144.21, stdev=5201.31 00:12:28.926 clat percentiles (usec): 00:12:28.926 | 1.00th=[ 6063], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8979], 00:12:28.926 | 30.00th=[10159], 40.00th=[11469], 50.00th=[12518], 60.00th=[13435], 00:12:28.926 | 70.00th=[14353], 80.00th=[15664], 90.00th=[18220], 95.00th=[22414], 00:12:28.926 | 99.00th=[33162], 99.50th=[38011], 99.90th=[45351], 99.95th=[45351], 00:12:28.926 | 99.99th=[45351] 00:12:28.926 write: IOPS=92, BW=11.5MiB/s (12.1MB/s)(100MiB/8672msec); 0 zone resets 00:12:28.926 slat (usec): min=29, max=8083, avg=141.09, stdev=319.32 00:12:28.926 clat (msec): min=22, max=281, avg=86.04, stdev=37.32 00:12:28.926 lat (msec): min=23, max=281, avg=86.18, stdev=37.31 00:12:28.926 clat percentiles (msec): 00:12:28.926 | 1.00th=[ 30], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 62], 00:12:28.926 | 30.00th=[ 64], 40.00th=[ 69], 50.00th=[ 73], 60.00th=[ 80], 00:12:28.926 | 70.00th=[ 87], 80.00th=[ 104], 90.00th=[ 134], 95.00th=[ 161], 00:12:28.926 | 99.00th=[ 239], 99.50th=[ 255], 99.90th=[ 284], 99.95th=[ 284], 00:12:28.926 | 99.99th=[ 284] 00:12:28.926 bw ( KiB/s): min= 2560, max=16640, per=1.24%, avg=10413.37, stdev=4483.03, samples=19 00:12:28.926 iops : min= 20, max= 130, avg=81.21, stdev=35.01, samples=19 00:12:28.926 lat (msec) : 10=13.81%, 20=32.75%, 50=3.94%, 100=38.56%, 250=10.56% 00:12:28.926 lat (msec) : 500=0.38% 00:12:28.926 cpu : usr=0.65%, sys=0.33%, ctx=2721, majf=0, minf=3 00:12:28.926 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 
complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 issued rwts: total=800,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.926 job20: (groupid=0, jobs=1): err= 0: pid=70966: Tue Jul 23 02:07:37 2024 00:12:28.926 read: IOPS=76, BW=9805KiB/s (10.0MB/s)(80.0MiB/8355msec) 00:12:28.926 slat (usec): min=7, max=1566, avg=68.91, stdev=145.72 00:12:28.926 clat (usec): min=5726, max=67423, avg=15126.66, stdev=8086.95 00:12:28.926 lat (usec): min=5740, max=67618, avg=15195.57, stdev=8088.69 00:12:28.926 clat percentiles (usec): 00:12:28.926 | 1.00th=[ 5997], 5.00th=[ 6390], 10.00th=[ 6980], 20.00th=[ 8586], 00:12:28.926 | 30.00th=[10552], 40.00th=[11731], 50.00th=[13304], 60.00th=[14615], 00:12:28.926 | 70.00th=[17171], 80.00th=[20055], 90.00th=[23200], 95.00th=[28705], 00:12:28.926 | 99.00th=[45351], 99.50th=[50594], 99.90th=[67634], 99.95th=[67634], 00:12:28.926 | 99.99th=[67634] 00:12:28.926 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(95.0MiB/8832msec); 0 zone resets 00:12:28.926 slat (usec): min=39, max=2027, avg=138.13, stdev=187.35 00:12:28.926 clat (msec): min=36, max=324, avg=92.05, stdev=39.75 00:12:28.926 lat (msec): min=37, max=324, avg=92.19, stdev=39.77 00:12:28.926 clat percentiles (msec): 00:12:28.926 | 1.00th=[ 52], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 66], 00:12:28.926 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 86], 00:12:28.926 | 70.00th=[ 95], 80.00th=[ 113], 90.00th=[ 148], 95.00th=[ 178], 00:12:28.926 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 326], 99.95th=[ 326], 00:12:28.926 | 99.99th=[ 326] 00:12:28.926 bw ( KiB/s): min= 2052, max=15872, per=1.15%, avg=9635.80, stdev=4217.02, samples=20 00:12:28.926 iops : min= 16, max= 124, avg=75.20, stdev=32.93, samples=20 00:12:28.926 lat (msec) : 10=12.36%, 20=24.21%, 50=9.36%, 100=39.79%, 250=13.71% 00:12:28.926 lat (msec) : 500=0.57% 00:12:28.926 cpu : usr=0.55%, sys=0.33%, ctx=2288, majf=0, 
minf=3 00:12:28.926 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 issued rwts: total=640,760,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.926 job21: (groupid=0, jobs=1): err= 0: pid=70967: Tue Jul 23 02:07:37 2024 00:12:28.926 read: IOPS=75, BW=9671KiB/s (9903kB/s)(80.0MiB/8471msec) 00:12:28.926 slat (usec): min=7, max=1290, avg=64.15, stdev=128.51 00:12:28.926 clat (msec): min=6, max=111, avg=18.47, stdev=11.05 00:12:28.926 lat (msec): min=6, max=111, avg=18.53, stdev=11.04 00:12:28.926 clat percentiles (msec): 00:12:28.926 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:12:28.926 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 19], 00:12:28.926 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 27], 95.00th=[ 35], 00:12:28.926 | 99.00th=[ 88], 99.50th=[ 101], 99.90th=[ 112], 99.95th=[ 112], 00:12:28.926 | 99.99th=[ 112] 00:12:28.926 write: IOPS=89, BW=11.2MiB/s (11.7MB/s)(95.6MiB/8548msec); 0 zone resets 00:12:28.926 slat (usec): min=40, max=2479, avg=143.09, stdev=190.21 00:12:28.926 clat (msec): min=8, max=280, avg=88.47, stdev=34.82 00:12:28.926 lat (msec): min=8, max=282, avg=88.61, stdev=34.85 00:12:28.926 clat percentiles (msec): 00:12:28.926 | 1.00th=[ 14], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 65], 00:12:28.926 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 87], 00:12:28.926 | 70.00th=[ 95], 80.00th=[ 110], 90.00th=[ 134], 95.00th=[ 159], 00:12:28.926 | 99.00th=[ 207], 99.50th=[ 249], 99.90th=[ 279], 99.95th=[ 279], 00:12:28.926 | 99.99th=[ 279] 00:12:28.926 bw ( KiB/s): min= 1792, max=17920, per=1.16%, avg=9699.70, stdev=4501.89, samples=20 00:12:28.926 iops : min= 14, max= 140, avg=75.70, stdev=35.11, samples=20 00:12:28.926 lat (msec) : 10=5.27%, 
20=25.20%, 50=15.52%, 100=39.93%, 250=13.88% 00:12:28.926 lat (msec) : 500=0.21% 00:12:28.926 cpu : usr=0.66%, sys=0.24%, ctx=2322, majf=0, minf=3 00:12:28.926 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 issued rwts: total=640,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.926 job22: (groupid=0, jobs=1): err= 0: pid=70968: Tue Jul 23 02:07:37 2024 00:12:28.926 read: IOPS=79, BW=9.97MiB/s (10.5MB/s)(80.0MiB/8023msec) 00:12:28.926 slat (usec): min=6, max=716, avg=41.07, stdev=64.34 00:12:28.926 clat (msec): min=4, max=182, avg=21.66, stdev=24.60 00:12:28.926 lat (msec): min=4, max=182, avg=21.70, stdev=24.61 00:12:28.926 clat percentiles (msec): 00:12:28.926 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:12:28.926 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 60.00th=[ 16], 00:12:28.926 | 70.00th=[ 19], 80.00th=[ 23], 90.00th=[ 43], 95.00th=[ 74], 00:12:28.926 | 99.00th=[ 140], 99.50th=[ 178], 99.90th=[ 182], 99.95th=[ 182], 00:12:28.926 | 99.99th=[ 182] 00:12:28.926 write: IOPS=78, BW=9991KiB/s (10.2MB/s)(81.0MiB/8302msec); 0 zone resets 00:12:28.926 slat (usec): min=33, max=1811, avg=144.18, stdev=180.15 00:12:28.926 clat (msec): min=55, max=228, avg=101.81, stdev=35.52 00:12:28.926 lat (msec): min=55, max=228, avg=101.96, stdev=35.52 00:12:28.926 clat percentiles (msec): 00:12:28.926 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 70], 00:12:28.926 | 30.00th=[ 75], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 109], 00:12:28.926 | 70.00th=[ 118], 80.00th=[ 132], 90.00th=[ 157], 95.00th=[ 171], 00:12:28.926 | 99.00th=[ 194], 99.50th=[ 222], 99.90th=[ 228], 99.95th=[ 228], 00:12:28.926 | 99.99th=[ 228] 00:12:28.926 bw ( KiB/s): min= 1792, max=14848, per=0.98%, avg=8187.95, 
stdev=3914.64, samples=20 00:12:28.926 iops : min= 14, max= 116, avg=63.70, stdev=30.64, samples=20 00:12:28.926 lat (msec) : 10=10.56%, 20=26.63%, 50=8.23%, 100=30.59%, 250=23.99% 00:12:28.926 cpu : usr=0.49%, sys=0.31%, ctx=2093, majf=0, minf=3 00:12:28.926 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.926 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.926 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.926 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.926 job23: (groupid=0, jobs=1): err= 0: pid=70969: Tue Jul 23 02:07:37 2024 00:12:28.926 read: IOPS=82, BW=10.3MiB/s (10.8MB/s)(80.0MiB/7732msec) 00:12:28.926 slat (usec): min=7, max=2395, avg=62.17, stdev=156.43 00:12:28.926 clat (usec): min=4417, max=75873, avg=15736.04, stdev=12564.32 00:12:28.926 lat (usec): min=4494, max=75885, avg=15798.20, stdev=12557.46 00:12:28.927 clat percentiles (usec): 00:12:28.927 | 1.00th=[ 4883], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7373], 00:12:28.927 | 30.00th=[ 9765], 40.00th=[10683], 50.00th=[11994], 60.00th=[13173], 00:12:28.927 | 70.00th=[16057], 80.00th=[18482], 90.00th=[31327], 95.00th=[44303], 00:12:28.927 | 99.00th=[69731], 99.50th=[70779], 99.90th=[76022], 99.95th=[76022], 00:12:28.927 | 99.99th=[76022] 00:12:28.927 write: IOPS=77, BW=9888KiB/s (10.1MB/s)(81.1MiB/8401msec); 0 zone resets 00:12:28.927 slat (usec): min=40, max=2337, avg=142.09, stdev=194.71 00:12:28.927 clat (msec): min=56, max=262, avg=102.80, stdev=39.87 00:12:28.927 lat (msec): min=56, max=262, avg=102.94, stdev=39.86 00:12:28.927 clat percentiles (msec): 00:12:28.927 | 1.00th=[ 58], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 68], 00:12:28.927 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 91], 60.00th=[ 104], 00:12:28.927 | 70.00th=[ 122], 80.00th=[ 136], 90.00th=[ 155], 95.00th=[ 178], 00:12:28.927 | 99.00th=[ 
249], 99.50th=[ 251], 99.90th=[ 264], 99.95th=[ 264], 00:12:28.927 | 99.99th=[ 264] 00:12:28.927 bw ( KiB/s): min= 255, max=14848, per=0.98%, avg=8201.10, stdev=4186.24, samples=20 00:12:28.927 iops : min= 1, max= 116, avg=63.80, stdev=32.81, samples=20 00:12:28.927 lat (msec) : 10=15.83%, 20=25.14%, 50=6.67%, 100=30.57%, 250=21.49% 00:12:28.927 lat (msec) : 500=0.31% 00:12:28.927 cpu : usr=0.43%, sys=0.38%, ctx=2110, majf=0, minf=7 00:12:28.927 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 issued rwts: total=640,649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.927 job24: (groupid=0, jobs=1): err= 0: pid=70970: Tue Jul 23 02:07:37 2024 00:12:28.927 read: IOPS=78, BW=9.78MiB/s (10.3MB/s)(84.9MiB/8681msec) 00:12:28.927 slat (usec): min=6, max=989, avg=54.17, stdev=91.48 00:12:28.927 clat (usec): min=4641, max=45720, avg=16306.96, stdev=7020.97 00:12:28.927 lat (usec): min=4764, max=45784, avg=16361.13, stdev=7029.50 00:12:28.927 clat percentiles (usec): 00:12:28.927 | 1.00th=[ 5538], 5.00th=[ 7963], 10.00th=[ 9503], 20.00th=[10814], 00:12:28.927 | 30.00th=[11863], 40.00th=[13304], 50.00th=[15139], 60.00th=[16450], 00:12:28.927 | 70.00th=[18220], 80.00th=[21365], 90.00th=[23987], 95.00th=[31327], 00:12:28.927 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:12:28.927 | 99.99th=[45876] 00:12:28.927 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8606msec); 0 zone resets 00:12:28.927 slat (usec): min=38, max=1334, avg=132.25, stdev=155.44 00:12:28.927 clat (msec): min=38, max=234, avg=85.38, stdev=29.38 00:12:28.927 lat (msec): min=38, max=235, avg=85.51, stdev=29.39 00:12:28.927 clat percentiles (msec): 00:12:28.927 | 1.00th=[ 51], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 
63], 00:12:28.927 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:12:28.927 | 70.00th=[ 91], 80.00th=[ 108], 90.00th=[ 130], 95.00th=[ 142], 00:12:28.927 | 99.00th=[ 184], 99.50th=[ 211], 99.90th=[ 236], 99.95th=[ 236], 00:12:28.927 | 99.99th=[ 236] 00:12:28.927 bw ( KiB/s): min= 1792, max=16640, per=1.22%, avg=10171.21, stdev=4285.49, samples=19 00:12:28.927 iops : min= 14, max= 130, avg=79.37, stdev=33.56, samples=19 00:12:28.927 lat (msec) : 10=6.09%, 20=28.47%, 50=11.83%, 100=41.18%, 250=12.44% 00:12:28.927 cpu : usr=0.60%, sys=0.31%, ctx=2438, majf=0, minf=7 00:12:28.927 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 issued rwts: total=679,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.927 job25: (groupid=0, jobs=1): err= 0: pid=70971: Tue Jul 23 02:07:37 2024 00:12:28.927 read: IOPS=73, BW=9468KiB/s (9696kB/s)(80.0MiB/8652msec) 00:12:28.927 slat (usec): min=7, max=1232, avg=58.93, stdev=122.92 00:12:28.927 clat (usec): min=7314, max=74218, avg=20689.37, stdev=10844.83 00:12:28.927 lat (usec): min=8547, max=74238, avg=20748.30, stdev=10840.69 00:12:28.927 clat percentiles (usec): 00:12:28.927 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12387], 00:12:28.927 | 30.00th=[13304], 40.00th=[15139], 50.00th=[18482], 60.00th=[20841], 00:12:28.927 | 70.00th=[22414], 80.00th=[25035], 90.00th=[35914], 95.00th=[44827], 00:12:28.927 | 99.00th=[57934], 99.50th=[58983], 99.90th=[73925], 99.95th=[73925], 00:12:28.927 | 99.99th=[73925] 00:12:28.927 write: IOPS=93, BW=11.7MiB/s (12.3MB/s)(98.2MiB/8398msec); 0 zone resets 00:12:28.927 slat (usec): min=32, max=1536, avg=137.57, stdev=161.89 00:12:28.927 clat (msec): min=6, max=253, avg=84.62, stdev=32.62 00:12:28.927 lat 
(msec): min=6, max=253, avg=84.76, stdev=32.62 00:12:28.927 clat percentiles (msec): 00:12:28.927 | 1.00th=[ 13], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 64], 00:12:28.927 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 81], 00:12:28.927 | 70.00th=[ 89], 80.00th=[ 100], 90.00th=[ 124], 95.00th=[ 155], 00:12:28.927 | 99.00th=[ 207], 99.50th=[ 222], 99.90th=[ 253], 99.95th=[ 253], 00:12:28.927 | 99.99th=[ 253] 00:12:28.927 bw ( KiB/s): min= 1280, max=19456, per=1.25%, avg=10493.63, stdev=4679.79, samples=19 00:12:28.927 iops : min= 10, max= 152, avg=81.89, stdev=36.55, samples=19 00:12:28.927 lat (msec) : 10=3.23%, 20=22.44%, 50=19.50%, 100=44.32%, 250=10.45% 00:12:28.927 lat (msec) : 500=0.07% 00:12:28.927 cpu : usr=0.61%, sys=0.34%, ctx=2275, majf=0, minf=3 00:12:28.927 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 issued rwts: total=640,786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.927 job26: (groupid=0, jobs=1): err= 0: pid=70972: Tue Jul 23 02:07:37 2024 00:12:28.927 read: IOPS=76, BW=9734KiB/s (9967kB/s)(80.0MiB/8416msec) 00:12:28.927 slat (usec): min=7, max=1689, avg=77.46, stdev=162.81 00:12:28.927 clat (usec): min=7717, max=44327, avg=17221.81, stdev=5501.64 00:12:28.927 lat (usec): min=7930, max=44341, avg=17299.28, stdev=5488.14 00:12:28.927 clat percentiles (usec): 00:12:28.927 | 1.00th=[ 9896], 5.00th=[11076], 10.00th=[11731], 20.00th=[12649], 00:12:28.927 | 30.00th=[13435], 40.00th=[14353], 50.00th=[15533], 60.00th=[17433], 00:12:28.927 | 70.00th=[19530], 80.00th=[21365], 90.00th=[23725], 95.00th=[27657], 00:12:28.927 | 99.00th=[35390], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:12:28.927 | 99.99th=[44303] 00:12:28.927 write: IOPS=90, BW=11.3MiB/s 
(11.9MB/s)(98.4MiB/8681msec); 0 zone resets 00:12:28.927 slat (usec): min=40, max=2961, avg=138.35, stdev=214.66 00:12:28.927 clat (usec): min=1562, max=348897, avg=87266.10, stdev=42984.86 00:12:28.927 lat (usec): min=1632, max=348952, avg=87404.45, stdev=42993.68 00:12:28.927 clat percentiles (msec): 00:12:28.927 | 1.00th=[ 8], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 63], 00:12:28.927 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 81], 00:12:28.927 | 70.00th=[ 90], 80.00th=[ 107], 90.00th=[ 142], 95.00th=[ 169], 00:12:28.927 | 99.00th=[ 266], 99.50th=[ 309], 99.90th=[ 351], 99.95th=[ 351], 00:12:28.927 | 99.99th=[ 351] 00:12:28.927 bw ( KiB/s): min= 1792, max=21034, per=1.19%, avg=9982.85, stdev=5324.76, samples=20 00:12:28.927 iops : min= 14, max= 164, avg=77.80, stdev=41.66, samples=20 00:12:28.927 lat (msec) : 2=0.14%, 4=0.14%, 10=1.05%, 20=32.03%, 50=13.45% 00:12:28.927 lat (msec) : 100=40.71%, 250=11.77%, 500=0.70% 00:12:28.927 cpu : usr=0.67%, sys=0.26%, ctx=2238, majf=0, minf=1 00:12:28.927 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.927 issued rwts: total=640,787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.927 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.927 job27: (groupid=0, jobs=1): err= 0: pid=70973: Tue Jul 23 02:07:37 2024 00:12:28.927 read: IOPS=73, BW=9360KiB/s (9585kB/s)(80.0MiB/8752msec) 00:12:28.927 slat (usec): min=6, max=2210, avg=59.73, stdev=144.57 00:12:28.927 clat (msec): min=4, max=444, avg=22.23, stdev=47.81 00:12:28.927 lat (msec): min=4, max=445, avg=22.29, stdev=47.83 00:12:28.927 clat percentiles (msec): 00:12:28.927 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:12:28.927 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:12:28.927 | 70.00th=[ 18], 80.00th=[ 20], 
90.00th=[ 27], 95.00th=[ 63], 00:12:28.927 | 99.00th=[ 405], 99.50th=[ 435], 99.90th=[ 447], 99.95th=[ 447], 00:12:28.927 | 99.99th=[ 447] 00:12:28.927 write: IOPS=81, BW=10.2MiB/s (10.7MB/s)(84.5MiB/8268msec); 0 zone resets 00:12:28.927 slat (usec): min=39, max=8051, avg=164.45, stdev=407.50 00:12:28.928 clat (msec): min=20, max=306, avg=96.46, stdev=40.30 00:12:28.928 lat (msec): min=20, max=307, avg=96.62, stdev=40.30 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 27], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 67], 00:12:28.928 | 30.00th=[ 73], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 90], 00:12:28.928 | 70.00th=[ 108], 80.00th=[ 126], 90.00th=[ 150], 95.00th=[ 176], 00:12:28.928 | 99.00th=[ 241], 99.50th=[ 266], 99.90th=[ 309], 99.95th=[ 309], 00:12:28.928 | 99.99th=[ 309] 00:12:28.928 bw ( KiB/s): min= 1792, max=16896, per=1.08%, avg=8999.68, stdev=4327.21, samples=19 00:12:28.928 iops : min= 14, max= 132, avg=70.21, stdev=33.95, samples=19 00:12:28.928 lat (msec) : 10=9.88%, 20=29.56%, 50=6.84%, 100=36.02%, 250=16.64% 00:12:28.928 lat (msec) : 500=1.06% 00:12:28.928 cpu : usr=0.49%, sys=0.32%, ctx=2160, majf=0, minf=5 00:12:28.928 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 issued rwts: total=640,676,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.928 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.928 job28: (groupid=0, jobs=1): err= 0: pid=70974: Tue Jul 23 02:07:37 2024 00:12:28.928 read: IOPS=73, BW=9465KiB/s (9692kB/s)(80.0MiB/8655msec) 00:12:28.928 slat (usec): min=6, max=5560, avg=77.36, stdev=273.57 00:12:28.928 clat (usec): min=8491, max=56700, avg=18059.88, stdev=6752.39 00:12:28.928 lat (usec): min=8574, max=56706, avg=18137.24, stdev=6749.01 00:12:28.928 clat percentiles (usec): 00:12:28.928 | 1.00th=[ 8848], 
5.00th=[10028], 10.00th=[10945], 20.00th=[11863], 00:12:28.928 | 30.00th=[13173], 40.00th=[15401], 50.00th=[17171], 60.00th=[19530], 00:12:28.928 | 70.00th=[21365], 80.00th=[22676], 90.00th=[25822], 95.00th=[29492], 00:12:28.928 | 99.00th=[36439], 99.50th=[53216], 99.90th=[56886], 99.95th=[56886], 00:12:28.928 | 99.99th=[56886] 00:12:28.928 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(99.8MiB/8586msec); 0 zone resets 00:12:28.928 slat (usec): min=27, max=2575, avg=138.49, stdev=175.31 00:12:28.928 clat (msec): min=52, max=255, avg=85.42, stdev=31.64 00:12:28.928 lat (msec): min=52, max=255, avg=85.56, stdev=31.64 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 64], 00:12:28.928 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:12:28.928 | 70.00th=[ 88], 80.00th=[ 103], 90.00th=[ 127], 95.00th=[ 148], 00:12:28.928 | 99.00th=[ 213], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 255], 00:12:28.928 | 99.99th=[ 255] 00:12:28.928 bw ( KiB/s): min= 1788, max=16896, per=1.27%, avg=10634.84, stdev=4463.73, samples=19 00:12:28.928 iops : min= 13, max= 132, avg=82.95, stdev=34.96, samples=19 00:12:28.928 lat (msec) : 10=2.16%, 20=25.73%, 50=16.34%, 100=44.02%, 250=11.61% 00:12:28.928 lat (msec) : 500=0.14% 00:12:28.928 cpu : usr=0.63%, sys=0.28%, ctx=2390, majf=0, minf=1 00:12:28.928 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 issued rwts: total=640,798,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.928 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.928 job29: (groupid=0, jobs=1): err= 0: pid=70975: Tue Jul 23 02:07:37 2024 00:12:28.928 read: IOPS=73, BW=9378KiB/s (9603kB/s)(80.0MiB/8735msec) 00:12:28.928 slat (usec): min=6, max=1190, avg=51.80, stdev=97.67 00:12:28.928 clat (usec): 
min=6538, max=72732, avg=17449.07, stdev=9861.31 00:12:28.928 lat (usec): min=6636, max=72739, avg=17500.87, stdev=9859.94 00:12:28.928 clat percentiles (usec): 00:12:28.928 | 1.00th=[ 7832], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11207], 00:12:28.928 | 30.00th=[12256], 40.00th=[13566], 50.00th=[14877], 60.00th=[16581], 00:12:28.928 | 70.00th=[19006], 80.00th=[21103], 90.00th=[24511], 95.00th=[32113], 00:12:28.928 | 99.00th=[67634], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:12:28.928 | 99.99th=[72877] 00:12:28.928 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(100MiB/8622msec); 0 zone resets 00:12:28.928 slat (usec): min=41, max=4674, avg=141.88, stdev=266.93 00:12:28.928 clat (msec): min=30, max=262, avg=83.71, stdev=31.13 00:12:28.928 lat (msec): min=30, max=262, avg=83.86, stdev=31.12 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 35], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 63], 00:12:28.928 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 81], 00:12:28.928 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 123], 95.00th=[ 144], 00:12:28.928 | 99.00th=[ 218], 99.50th=[ 243], 99.90th=[ 264], 99.95th=[ 264], 00:12:28.928 | 99.99th=[ 264] 00:12:28.928 bw ( KiB/s): min= 513, max=16896, per=1.21%, avg=10148.50, stdev=4746.52, samples=20 00:12:28.928 iops : min= 4, max= 132, avg=79.20, stdev=37.13, samples=20 00:12:28.928 lat (msec) : 10=3.96%, 20=28.89%, 50=11.04%, 100=45.76%, 250=10.21% 00:12:28.928 lat (msec) : 500=0.14% 00:12:28.928 cpu : usr=0.58%, sys=0.33%, ctx=2324, majf=0, minf=1 00:12:28.928 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.928 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.928 job30: (groupid=0, jobs=1): err= 0: pid=70976: Tue 
Jul 23 02:07:37 2024 00:12:28.928 read: IOPS=62, BW=7977KiB/s (8169kB/s)(60.0MiB/7702msec) 00:12:28.928 slat (usec): min=6, max=1092, avg=56.02, stdev=103.62 00:12:28.928 clat (msec): min=9, max=127, avg=25.20, stdev=13.80 00:12:28.928 lat (msec): min=9, max=127, avg=25.26, stdev=13.80 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 14], 20.00th=[ 16], 00:12:28.928 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 25], 00:12:28.928 | 70.00th=[ 28], 80.00th=[ 33], 90.00th=[ 40], 95.00th=[ 44], 00:12:28.928 | 99.00th=[ 96], 99.50th=[ 105], 99.90th=[ 128], 99.95th=[ 128], 00:12:28.928 | 99.99th=[ 128] 00:12:28.928 write: IOPS=60, BW=7691KiB/s (7875kB/s)(64.1MiB/8538msec); 0 zone resets 00:12:28.928 slat (usec): min=36, max=7767, avg=149.14, stdev=378.70 00:12:28.928 clat (msec): min=83, max=466, avg=131.48, stdev=64.65 00:12:28.928 lat (msec): min=83, max=466, avg=131.63, stdev=64.63 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 85], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 91], 00:12:28.928 | 30.00th=[ 94], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 115], 00:12:28.928 | 70.00th=[ 130], 80.00th=[ 153], 90.00th=[ 226], 95.00th=[ 268], 00:12:28.928 | 99.00th=[ 405], 99.50th=[ 460], 99.90th=[ 468], 99.95th=[ 468], 00:12:28.928 | 99.99th=[ 468] 00:12:28.928 bw ( KiB/s): min= 1792, max=11520, per=0.86%, avg=7195.00, stdev=2966.53, samples=18 00:12:28.928 iops : min= 14, max= 90, avg=56.11, stdev=23.19, samples=18 00:12:28.928 lat (msec) : 10=0.20%, 20=20.24%, 50=26.18%, 100=22.36%, 250=27.69% 00:12:28.928 lat (msec) : 500=3.32% 00:12:28.928 cpu : usr=0.43%, sys=0.21%, ctx=1639, majf=0, minf=7 00:12:28.928 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 issued rwts: total=480,513,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:12:28.928 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.928 job31: (groupid=0, jobs=1): err= 0: pid=70977: Tue Jul 23 02:07:37 2024 00:12:28.928 read: IOPS=46, BW=5997KiB/s (6141kB/s)(41.2MiB/7044msec) 00:12:28.928 slat (usec): min=7, max=3529, avg=92.57, stdev=261.46 00:12:28.928 clat (msec): min=7, max=232, avg=25.98, stdev=32.47 00:12:28.928 lat (msec): min=7, max=232, avg=26.07, stdev=32.46 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 13], 00:12:28.928 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:12:28.928 | 70.00th=[ 23], 80.00th=[ 25], 90.00th=[ 50], 95.00th=[ 64], 00:12:28.928 | 99.00th=[ 228], 99.50th=[ 230], 99.90th=[ 232], 99.95th=[ 232], 00:12:28.928 | 99.99th=[ 232] 00:12:28.928 write: IOPS=53, BW=6858KiB/s (7022kB/s)(60.0MiB/8959msec); 0 zone resets 00:12:28.928 slat (usec): min=38, max=14577, avg=177.51, stdev=679.93 00:12:28.928 clat (msec): min=2, max=412, avg=148.10, stdev=66.12 00:12:28.928 lat (msec): min=2, max=413, avg=148.28, stdev=66.11 00:12:28.928 clat percentiles (msec): 00:12:28.928 | 1.00th=[ 8], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 96], 00:12:28.928 | 30.00th=[ 105], 40.00th=[ 115], 50.00th=[ 127], 60.00th=[ 146], 00:12:28.928 | 70.00th=[ 186], 80.00th=[ 207], 90.00th=[ 232], 95.00th=[ 262], 00:12:28.928 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 414], 99.95th=[ 414], 00:12:28.928 | 99.99th=[ 414] 00:12:28.928 bw ( KiB/s): min= 3584, max=11776, per=0.82%, avg=6823.94, stdev=2406.22, samples=18 00:12:28.928 iops : min= 28, max= 92, avg=53.17, stdev=18.73, samples=18 00:12:28.928 lat (msec) : 4=0.25%, 10=3.46%, 20=22.59%, 50=12.22%, 100=15.06% 00:12:28.928 lat (msec) : 250=42.96%, 500=3.46% 00:12:28.928 cpu : usr=0.39%, sys=0.17%, ctx=1375, majf=0, minf=1 00:12:28.928 IO depths : 1=0.7%, 2=1.5%, 4=3.0%, 8=94.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:12:28.928 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.928 issued rwts: total=330,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.928 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.928 job32: (groupid=0, jobs=1): err= 0: pid=70978: Tue Jul 23 02:07:37 2024 00:12:28.928 read: IOPS=49, BW=6391KiB/s (6544kB/s)(40.0MiB/6409msec) 00:12:28.928 slat (usec): min=8, max=1100, avg=59.11, stdev=100.78 00:12:28.928 clat (msec): min=6, max=111, avg=24.64, stdev=14.92 00:12:28.928 lat (msec): min=6, max=112, avg=24.70, stdev=14.94 00:12:28.928 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 8], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 17], 00:12:28.929 | 30.00th=[ 18], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 24], 00:12:28.929 | 70.00th=[ 27], 80.00th=[ 30], 90.00th=[ 35], 95.00th=[ 41], 00:12:28.929 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 112], 99.95th=[ 112], 00:12:28.929 | 99.99th=[ 112] 00:12:28.929 write: IOPS=45, BW=5768KiB/s (5906kB/s)(51.1MiB/9077msec); 0 zone resets 00:12:28.929 slat (usec): min=47, max=3707, avg=175.22, stdev=273.48 00:12:28.929 clat (msec): min=76, max=647, avg=176.71, stdev=91.57 00:12:28.929 lat (msec): min=76, max=647, avg=176.88, stdev=91.55 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 100], 20.00th=[ 113], 00:12:28.929 | 30.00th=[ 124], 40.00th=[ 136], 50.00th=[ 157], 60.00th=[ 182], 00:12:28.929 | 70.00th=[ 199], 80.00th=[ 218], 90.00th=[ 247], 95.00th=[ 284], 00:12:28.929 | 99.00th=[ 600], 99.50th=[ 634], 99.90th=[ 651], 99.95th=[ 651], 00:12:28.929 | 99.99th=[ 651] 00:12:28.929 bw ( KiB/s): min= 768, max= 9216, per=0.65%, avg=5410.68, stdev=2499.54, samples=19 00:12:28.929 iops : min= 6, max= 72, avg=42.16, stdev=19.51, samples=19 00:12:28.929 lat (msec) : 10=1.10%, 20=16.87%, 50=24.83%, 100=6.17%, 250=45.54% 00:12:28.929 lat (msec) : 500=3.84%, 750=1.65% 00:12:28.929 cpu : usr=0.41%, sys=0.10%, ctx=1263, majf=0, minf=5 
00:12:28.929 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 issued rwts: total=320,409,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.929 job33: (groupid=0, jobs=1): err= 0: pid=70979: Tue Jul 23 02:07:37 2024 00:12:28.929 read: IOPS=56, BW=7188KiB/s (7361kB/s)(53.6MiB/7639msec) 00:12:28.929 slat (usec): min=6, max=1087, avg=63.12, stdev=111.87 00:12:28.929 clat (msec): min=6, max=193, avg=25.05, stdev=26.52 00:12:28.929 lat (msec): min=7, max=193, avg=25.12, stdev=26.52 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:12:28.929 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 20], 00:12:28.929 | 70.00th=[ 32], 80.00th=[ 36], 90.00th=[ 45], 95.00th=[ 51], 00:12:28.929 | 99.00th=[ 186], 99.50th=[ 190], 99.90th=[ 194], 99.95th=[ 194], 00:12:28.929 | 99.99th=[ 194] 00:12:28.929 write: IOPS=55, BW=7096KiB/s (7267kB/s)(60.0MiB/8658msec); 0 zone resets 00:12:28.929 slat (usec): min=44, max=2791, avg=149.31, stdev=203.62 00:12:28.929 clat (msec): min=45, max=694, avg=143.17, stdev=83.78 00:12:28.929 lat (msec): min=45, max=694, avg=143.31, stdev=83.79 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 51], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 90], 00:12:28.929 | 30.00th=[ 94], 40.00th=[ 102], 50.00th=[ 115], 60.00th=[ 128], 00:12:28.929 | 70.00th=[ 153], 80.00th=[ 190], 90.00th=[ 230], 95.00th=[ 259], 00:12:28.929 | 99.00th=[ 592], 99.50th=[ 676], 99.90th=[ 693], 99.95th=[ 693], 00:12:28.929 | 99.99th=[ 693] 00:12:28.929 bw ( KiB/s): min= 768, max=10496, per=0.82%, avg=6825.28, stdev=3043.61, samples=18 00:12:28.929 iops : min= 6, max= 82, avg=53.22, stdev=23.77, samples=18 00:12:28.929 lat (msec) : 10=8.25%, 
20=20.24%, 50=16.50%, 100=21.56%, 250=30.25% 00:12:28.929 lat (msec) : 500=2.53%, 750=0.66% 00:12:28.929 cpu : usr=0.42%, sys=0.17%, ctx=1550, majf=0, minf=5 00:12:28.929 IO depths : 1=0.7%, 2=1.3%, 4=2.6%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 issued rwts: total=429,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.929 job34: (groupid=0, jobs=1): err= 0: pid=70980: Tue Jul 23 02:07:37 2024 00:12:28.929 read: IOPS=61, BW=7867KiB/s (8056kB/s)(60.0MiB/7810msec) 00:12:28.929 slat (usec): min=7, max=1060, avg=65.23, stdev=117.37 00:12:28.929 clat (msec): min=14, max=119, avg=32.79, stdev=15.74 00:12:28.929 lat (msec): min=15, max=119, avg=32.86, stdev=15.75 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 17], 5.00th=[ 19], 10.00th=[ 20], 20.00th=[ 21], 00:12:28.929 | 30.00th=[ 24], 40.00th=[ 26], 50.00th=[ 29], 60.00th=[ 32], 00:12:28.929 | 70.00th=[ 35], 80.00th=[ 42], 90.00th=[ 52], 95.00th=[ 64], 00:12:28.929 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 121], 99.95th=[ 121], 00:12:28.929 | 99.99th=[ 121] 00:12:28.929 write: IOPS=63, BW=8098KiB/s (8292kB/s)(64.0MiB/8093msec); 0 zone resets 00:12:28.929 slat (usec): min=38, max=12442, avg=169.96, stdev=569.40 00:12:28.929 clat (msec): min=74, max=449, avg=124.62, stdev=56.80 00:12:28.929 lat (msec): min=75, max=449, avg=124.79, stdev=56.77 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.929 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 112], 00:12:28.929 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 247], 00:12:28.929 | 99.00th=[ 380], 99.50th=[ 426], 99.90th=[ 451], 99.95th=[ 451], 00:12:28.929 | 99.99th=[ 451] 00:12:28.929 bw ( KiB/s): min= 1792, max=12032, 
per=0.91%, avg=7590.82, stdev=2940.76, samples=17 00:12:28.929 iops : min= 14, max= 94, avg=59.29, stdev=22.97, samples=17 00:12:28.929 lat (msec) : 20=8.06%, 50=35.08%, 100=26.92%, 250=27.62%, 500=2.32% 00:12:28.929 cpu : usr=0.47%, sys=0.20%, ctx=1690, majf=0, minf=1 00:12:28.929 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 issued rwts: total=480,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.929 job35: (groupid=0, jobs=1): err= 0: pid=70981: Tue Jul 23 02:07:37 2024 00:12:28.929 read: IOPS=61, BW=7823KiB/s (8011kB/s)(60.0MiB/7854msec) 00:12:28.929 slat (usec): min=7, max=1361, avg=68.91, stdev=129.81 00:12:28.929 clat (msec): min=9, max=204, avg=32.38, stdev=25.34 00:12:28.929 lat (msec): min=9, max=204, avg=32.45, stdev=25.33 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 11], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 18], 00:12:28.929 | 30.00th=[ 22], 40.00th=[ 26], 50.00th=[ 28], 60.00th=[ 32], 00:12:28.929 | 70.00th=[ 35], 80.00th=[ 40], 90.00th=[ 47], 95.00th=[ 57], 00:12:28.929 | 99.00th=[ 169], 99.50th=[ 192], 99.90th=[ 205], 99.95th=[ 205], 00:12:28.929 | 99.99th=[ 205] 00:12:28.929 write: IOPS=64, BW=8243KiB/s (8441kB/s)(65.4MiB/8121msec); 0 zone resets 00:12:28.929 slat (usec): min=38, max=5324, avg=168.42, stdev=337.72 00:12:28.929 clat (msec): min=54, max=388, avg=122.79, stdev=53.41 00:12:28.929 lat (msec): min=54, max=388, avg=122.96, stdev=53.41 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 62], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 90], 00:12:28.929 | 30.00th=[ 95], 40.00th=[ 100], 50.00th=[ 105], 60.00th=[ 112], 00:12:28.929 | 70.00th=[ 123], 80.00th=[ 140], 90.00th=[ 180], 95.00th=[ 234], 00:12:28.929 | 99.00th=[ 368], 99.50th=[ 388], 
99.90th=[ 388], 99.95th=[ 388], 00:12:28.929 | 99.99th=[ 388] 00:12:28.929 bw ( KiB/s): min= 1792, max=11264, per=0.88%, avg=7323.94, stdev=3197.52, samples=18 00:12:28.929 iops : min= 14, max= 88, avg=57.11, stdev=25.00, samples=18 00:12:28.929 lat (msec) : 10=0.10%, 20=13.06%, 50=31.01%, 100=23.53%, 250=30.21% 00:12:28.929 lat (msec) : 500=2.09% 00:12:28.929 cpu : usr=0.50%, sys=0.18%, ctx=1680, majf=0, minf=3 00:12:28.929 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 issued rwts: total=480,523,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.929 job36: (groupid=0, jobs=1): err= 0: pid=70982: Tue Jul 23 02:07:37 2024 00:12:28.929 read: IOPS=61, BW=7828KiB/s (8016kB/s)(60.0MiB/7849msec) 00:12:28.929 slat (usec): min=7, max=996, avg=74.83, stdev=141.59 00:12:28.929 clat (usec): min=12006, max=77156, avg=30208.40, stdev=10995.26 00:12:28.929 lat (usec): min=12025, max=77169, avg=30283.23, stdev=10988.59 00:12:28.929 clat percentiles (usec): 00:12:28.929 | 1.00th=[13566], 5.00th=[16450], 10.00th=[18744], 20.00th=[20579], 00:12:28.929 | 30.00th=[22676], 40.00th=[26084], 50.00th=[28705], 60.00th=[30802], 00:12:28.929 | 70.00th=[34341], 80.00th=[38011], 90.00th=[45876], 95.00th=[50594], 00:12:28.929 | 99.00th=[67634], 99.50th=[72877], 99.90th=[77071], 99.95th=[77071], 00:12:28.929 | 99.99th=[77071] 00:12:28.929 write: IOPS=63, BW=8167KiB/s (8363kB/s)(65.9MiB/8260msec); 0 zone resets 00:12:28.929 slat (usec): min=32, max=2031, avg=140.13, stdev=176.68 00:12:28.929 clat (msec): min=24, max=451, avg=123.76, stdev=59.00 00:12:28.929 lat (msec): min=24, max=451, avg=123.90, stdev=59.01 00:12:28.929 clat percentiles (msec): 00:12:28.929 | 1.00th=[ 32], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 90], 
00:12:28.929 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 104], 60.00th=[ 113], 00:12:28.929 | 70.00th=[ 125], 80.00th=[ 148], 90.00th=[ 178], 95.00th=[ 224], 00:12:28.929 | 99.00th=[ 393], 99.50th=[ 409], 99.90th=[ 451], 99.95th=[ 451], 00:12:28.929 | 99.99th=[ 451] 00:12:28.929 bw ( KiB/s): min= 1277, max=11520, per=0.84%, avg=6992.47, stdev=3408.67, samples=19 00:12:28.929 iops : min= 9, max= 90, avg=54.53, stdev=26.80, samples=19 00:12:28.929 lat (msec) : 20=7.45%, 50=38.33%, 100=25.62%, 250=26.32%, 500=2.28% 00:12:28.929 cpu : usr=0.47%, sys=0.19%, ctx=1695, majf=0, minf=5 00:12:28.929 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.929 issued rwts: total=480,527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.929 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.929 job37: (groupid=0, jobs=1): err= 0: pid=70983: Tue Jul 23 02:07:37 2024 00:12:28.930 read: IOPS=59, BW=7582KiB/s (7764kB/s)(60.0MiB/8103msec) 00:12:28.930 slat (usec): min=6, max=1189, avg=60.65, stdev=117.84 00:12:28.930 clat (usec): min=13439, max=66628, avg=24865.79, stdev=7022.56 00:12:28.930 lat (usec): min=13510, max=66665, avg=24926.43, stdev=7025.91 00:12:28.930 clat percentiles (usec): 00:12:28.930 | 1.00th=[14353], 5.00th=[17433], 10.00th=[17957], 20.00th=[19006], 00:12:28.930 | 30.00th=[20317], 40.00th=[21627], 50.00th=[23200], 60.00th=[24249], 00:12:28.930 | 70.00th=[27919], 80.00th=[30278], 90.00th=[33424], 95.00th=[38011], 00:12:28.930 | 99.00th=[46924], 99.50th=[52691], 99.90th=[66847], 99.95th=[66847], 00:12:28.930 | 99.99th=[66847] 00:12:28.930 write: IOPS=63, BW=8098KiB/s (8292kB/s)(67.8MiB/8567msec); 0 zone resets 00:12:28.930 slat (usec): min=33, max=2625, avg=141.52, stdev=198.33 00:12:28.930 clat (msec): min=64, max=485, avg=125.13, stdev=56.72 00:12:28.930 lat 
(msec): min=64, max=485, avg=125.27, stdev=56.74 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 71], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 92], 00:12:28.930 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 114], 00:12:28.930 | 70.00th=[ 124], 80.00th=[ 136], 90.00th=[ 186], 95.00th=[ 251], 00:12:28.930 | 99.00th=[ 368], 99.50th=[ 430], 99.90th=[ 485], 99.95th=[ 485], 00:12:28.930 | 99.99th=[ 485] 00:12:28.930 bw ( KiB/s): min= 1792, max=10773, per=0.91%, avg=7591.94, stdev=2940.95, samples=18 00:12:28.930 iops : min= 14, max= 84, avg=59.22, stdev=22.96, samples=18 00:12:28.930 lat (msec) : 20=13.11%, 50=33.46%, 100=18.79%, 250=31.90%, 500=2.74% 00:12:28.930 cpu : usr=0.50%, sys=0.19%, ctx=1683, majf=0, minf=7 00:12:28.930 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 issued rwts: total=480,542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.930 job38: (groupid=0, jobs=1): err= 0: pid=70984: Tue Jul 23 02:07:37 2024 00:12:28.930 read: IOPS=59, BW=7611KiB/s (7794kB/s)(60.0MiB/8072msec) 00:12:28.930 slat (usec): min=6, max=2954, avg=64.25, stdev=170.96 00:12:28.930 clat (msec): min=8, max=209, avg=27.65, stdev=24.14 00:12:28.930 lat (msec): min=8, max=209, avg=27.72, stdev=24.18 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 14], 5.00th=[ 15], 10.00th=[ 17], 20.00th=[ 19], 00:12:28.930 | 30.00th=[ 19], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 24], 00:12:28.930 | 70.00th=[ 29], 80.00th=[ 32], 90.00th=[ 39], 95.00th=[ 48], 00:12:28.930 | 99.00th=[ 201], 99.50th=[ 205], 99.90th=[ 209], 99.95th=[ 209], 00:12:28.930 | 99.99th=[ 209] 00:12:28.930 write: IOPS=63, BW=8165KiB/s (8361kB/s)(67.0MiB/8403msec); 0 zone resets 00:12:28.930 slat (usec): min=32, max=3392, 
avg=151.38, stdev=250.36 00:12:28.930 clat (msec): min=34, max=412, avg=124.19, stdev=56.20 00:12:28.930 lat (msec): min=35, max=412, avg=124.35, stdev=56.19 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 42], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 90], 00:12:28.930 | 30.00th=[ 93], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 115], 00:12:28.930 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 182], 95.00th=[ 243], 00:12:28.930 | 99.00th=[ 393], 99.50th=[ 405], 99.90th=[ 414], 99.95th=[ 414], 00:12:28.930 | 99.99th=[ 414] 00:12:28.930 bw ( KiB/s): min= 512, max=11520, per=0.81%, avg=6768.95, stdev=3632.67, samples=20 00:12:28.930 iops : min= 4, max= 90, avg=52.75, stdev=28.44, samples=20 00:12:28.930 lat (msec) : 10=0.10%, 20=20.28%, 50=25.30%, 100=22.64%, 250=29.43% 00:12:28.930 lat (msec) : 500=2.26% 00:12:28.930 cpu : usr=0.48%, sys=0.20%, ctx=1647, majf=0, minf=3 00:12:28.930 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 issued rwts: total=480,536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.930 job39: (groupid=0, jobs=1): err= 0: pid=70985: Tue Jul 23 02:07:37 2024 00:12:28.930 read: IOPS=57, BW=7410KiB/s (7587kB/s)(60.0MiB/8292msec) 00:12:28.930 slat (usec): min=7, max=2642, avg=65.09, stdev=156.26 00:12:28.930 clat (msec): min=4, max=131, avg=21.41, stdev=16.69 00:12:28.930 lat (msec): min=5, max=131, avg=21.47, stdev=16.68 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:12:28.930 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 19], 00:12:28.930 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 32], 95.00th=[ 50], 00:12:28.930 | 99.00th=[ 117], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:12:28.930 | 
99.99th=[ 132] 00:12:28.930 write: IOPS=63, BW=8158KiB/s (8353kB/s)(70.0MiB/8787msec); 0 zone resets 00:12:28.930 slat (usec): min=33, max=1529, avg=146.41, stdev=176.38 00:12:28.930 clat (msec): min=4, max=347, avg=124.72, stdev=48.99 00:12:28.930 lat (msec): min=4, max=347, avg=124.86, stdev=49.00 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 16], 5.00th=[ 86], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.930 | 30.00th=[ 96], 40.00th=[ 104], 50.00th=[ 109], 60.00th=[ 118], 00:12:28.930 | 70.00th=[ 132], 80.00th=[ 159], 90.00th=[ 207], 95.00th=[ 224], 00:12:28.930 | 99.00th=[ 257], 99.50th=[ 266], 99.90th=[ 347], 99.95th=[ 347], 00:12:28.930 | 99.99th=[ 347] 00:12:28.930 bw ( KiB/s): min= 1024, max=13568, per=0.89%, avg=7448.26, stdev=3140.05, samples=19 00:12:28.930 iops : min= 8, max= 106, avg=58.05, stdev=24.49, samples=19 00:12:28.930 lat (msec) : 10=1.73%, 20=29.33%, 50=14.33%, 100=19.71%, 250=34.04% 00:12:28.930 lat (msec) : 500=0.87% 00:12:28.930 cpu : usr=0.44%, sys=0.23%, ctx=1737, majf=0, minf=5 00:12:28.930 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 issued rwts: total=480,560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.930 job40: (groupid=0, jobs=1): err= 0: pid=70986: Tue Jul 23 02:07:37 2024 00:12:28.930 read: IOPS=66, BW=8559KiB/s (8765kB/s)(60.0MiB/7178msec) 00:12:28.930 slat (usec): min=7, max=1604, avg=72.82, stdev=160.36 00:12:28.930 clat (msec): min=5, max=142, avg=19.73, stdev=21.95 00:12:28.930 lat (msec): min=5, max=142, avg=19.81, stdev=21.95 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:12:28.930 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 16], 00:12:28.930 | 70.00th=[ 18], 
80.00th=[ 23], 90.00th=[ 32], 95.00th=[ 52], 00:12:28.930 | 99.00th=[ 129], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:12:28.930 | 99.99th=[ 144] 00:12:28.930 write: IOPS=55, BW=7068KiB/s (7237kB/s)(61.2MiB/8874msec); 0 zone resets 00:12:28.930 slat (usec): min=38, max=1143, avg=137.56, stdev=139.70 00:12:28.930 clat (msec): min=46, max=398, avg=143.70, stdev=66.76 00:12:28.930 lat (msec): min=46, max=398, avg=143.83, stdev=66.76 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 52], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 90], 00:12:28.930 | 30.00th=[ 99], 40.00th=[ 108], 50.00th=[ 121], 60.00th=[ 138], 00:12:28.930 | 70.00th=[ 163], 80.00th=[ 188], 90.00th=[ 249], 95.00th=[ 296], 00:12:28.930 | 99.00th=[ 355], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 401], 00:12:28.930 | 99.99th=[ 401] 00:12:28.930 bw ( KiB/s): min= 495, max=11776, per=0.74%, avg=6167.65, stdev=3303.45, samples=20 00:12:28.930 iops : min= 3, max= 92, avg=48.05, stdev=25.88, samples=20 00:12:28.930 lat (msec) : 10=14.54%, 20=24.02%, 50=8.56%, 100=17.01%, 250=31.03% 00:12:28.930 lat (msec) : 500=4.85% 00:12:28.930 cpu : usr=0.42%, sys=0.21%, ctx=1623, majf=0, minf=5 00:12:28.930 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=94.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 issued rwts: total=480,490,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.930 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.930 job41: (groupid=0, jobs=1): err= 0: pid=70987: Tue Jul 23 02:07:37 2024 00:12:28.930 read: IOPS=61, BW=7839KiB/s (8027kB/s)(60.0MiB/7838msec) 00:12:28.930 slat (usec): min=7, max=1864, avg=75.58, stdev=155.26 00:12:28.930 clat (msec): min=7, max=138, avg=24.18, stdev=22.10 00:12:28.930 lat (msec): min=7, max=138, avg=24.26, stdev=22.11 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 8], 5.00th=[ 
9], 10.00th=[ 11], 20.00th=[ 13], 00:12:28.930 | 30.00th=[ 15], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 20], 00:12:28.930 | 70.00th=[ 22], 80.00th=[ 27], 90.00th=[ 40], 95.00th=[ 71], 00:12:28.930 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:12:28.930 | 99.99th=[ 138] 00:12:28.930 write: IOPS=59, BW=7572KiB/s (7754kB/s)(63.6MiB/8604msec); 0 zone resets 00:12:28.930 slat (usec): min=43, max=8290, avg=160.55, stdev=404.15 00:12:28.930 clat (msec): min=65, max=538, avg=133.37, stdev=72.05 00:12:28.930 lat (msec): min=67, max=539, avg=133.54, stdev=72.06 00:12:28.930 clat percentiles (msec): 00:12:28.930 | 1.00th=[ 73], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:12:28.930 | 30.00th=[ 91], 40.00th=[ 100], 50.00th=[ 107], 60.00th=[ 120], 00:12:28.930 | 70.00th=[ 140], 80.00th=[ 159], 90.00th=[ 207], 95.00th=[ 279], 00:12:28.930 | 99.00th=[ 439], 99.50th=[ 493], 99.90th=[ 542], 99.95th=[ 542], 00:12:28.930 | 99.99th=[ 542] 00:12:28.930 bw ( KiB/s): min= 1280, max=11008, per=0.81%, avg=6749.11, stdev=3222.14, samples=19 00:12:28.930 iops : min= 10, max= 86, avg=52.63, stdev=25.20, samples=19 00:12:28.930 lat (msec) : 10=3.74%, 20=27.20%, 50=14.05%, 100=23.46%, 250=28.31% 00:12:28.930 lat (msec) : 500=3.03%, 750=0.20% 00:12:28.930 cpu : usr=0.44%, sys=0.19%, ctx=1647, majf=0, minf=5 00:12:28.930 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.930 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 issued rwts: total=480,509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.931 job42: (groupid=0, jobs=1): err= 0: pid=70988: Tue Jul 23 02:07:37 2024 00:12:28.931 read: IOPS=58, BW=7469KiB/s (7648kB/s)(60.0MiB/8226msec) 00:12:28.931 slat (usec): min=7, max=1374, avg=82.36, stdev=151.73 00:12:28.931 clat (msec): min=10, max=162, 
avg=31.70, stdev=17.67 00:12:28.931 lat (msec): min=11, max=162, avg=31.79, stdev=17.65 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 13], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 21], 00:12:28.931 | 30.00th=[ 24], 40.00th=[ 27], 50.00th=[ 29], 60.00th=[ 31], 00:12:28.931 | 70.00th=[ 33], 80.00th=[ 38], 90.00th=[ 51], 95.00th=[ 58], 00:12:28.931 | 99.00th=[ 121], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 163], 00:12:28.931 | 99.99th=[ 163] 00:12:28.931 write: IOPS=67, BW=8614KiB/s (8820kB/s)(68.8MiB/8173msec); 0 zone resets 00:12:28.931 slat (usec): min=42, max=1720, avg=142.63, stdev=171.00 00:12:28.931 clat (msec): min=7, max=460, avg=117.85, stdev=55.01 00:12:28.931 lat (msec): min=7, max=460, avg=117.99, stdev=55.01 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 10], 5.00th=[ 83], 10.00th=[ 85], 20.00th=[ 87], 00:12:28.931 | 30.00th=[ 91], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 115], 00:12:28.931 | 70.00th=[ 125], 80.00th=[ 142], 90.00th=[ 169], 95.00th=[ 209], 00:12:28.931 | 99.00th=[ 372], 99.50th=[ 388], 99.90th=[ 460], 99.95th=[ 460], 00:12:28.931 | 99.99th=[ 460] 00:12:28.931 bw ( KiB/s): min= 255, max=14080, per=0.92%, avg=7721.67, stdev=3596.07, samples=18 00:12:28.931 iops : min= 1, max= 110, avg=60.22, stdev=28.21, samples=18 00:12:28.931 lat (msec) : 10=0.58%, 20=10.00%, 50=33.40%, 100=26.60%, 250=27.48% 00:12:28.931 lat (msec) : 500=1.94% 00:12:28.931 cpu : usr=0.48%, sys=0.20%, ctx=1696, majf=0, minf=1 00:12:28.931 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 issued rwts: total=480,550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.931 job43: (groupid=0, jobs=1): err= 0: pid=70989: Tue Jul 23 02:07:37 2024 00:12:28.931 read: IOPS=66, 
BW=8466KiB/s (8669kB/s)(60.0MiB/7257msec) 00:12:28.931 slat (usec): min=7, max=558, avg=51.33, stdev=72.83 00:12:28.931 clat (msec): min=8, max=301, avg=22.58, stdev=35.27 00:12:28.931 lat (msec): min=8, max=301, avg=22.63, stdev=35.27 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:12:28.931 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:12:28.931 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 30], 95.00th=[ 44], 00:12:28.931 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 300], 99.95th=[ 300], 00:12:28.931 | 99.99th=[ 300] 00:12:28.931 write: IOPS=59, BW=7679KiB/s (7864kB/s)(65.1MiB/8684msec); 0 zone resets 00:12:28.931 slat (usec): min=44, max=3021, avg=140.54, stdev=215.03 00:12:28.931 clat (msec): min=74, max=458, avg=131.82, stdev=60.52 00:12:28.931 lat (msec): min=75, max=459, avg=131.96, stdev=60.52 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 81], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 88], 00:12:28.931 | 30.00th=[ 93], 40.00th=[ 100], 50.00th=[ 110], 60.00th=[ 126], 00:12:28.931 | 70.00th=[ 142], 80.00th=[ 163], 90.00th=[ 199], 95.00th=[ 271], 00:12:28.931 | 99.00th=[ 359], 99.50th=[ 422], 99.90th=[ 460], 99.95th=[ 460], 00:12:28.931 | 99.99th=[ 460] 00:12:28.931 bw ( KiB/s): min= 1788, max=10240, per=0.83%, avg=6921.11, stdev=2866.61, samples=19 00:12:28.931 iops : min= 13, max= 80, avg=53.95, stdev=22.51, samples=19 00:12:28.931 lat (msec) : 10=7.99%, 20=23.08%, 50=14.79%, 100=22.58%, 250=27.77% 00:12:28.931 lat (msec) : 500=3.80% 00:12:28.931 cpu : usr=0.39%, sys=0.23%, ctx=1699, majf=0, minf=5 00:12:28.931 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 issued rwts: total=480,521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.931 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:12:28.931 job44: (groupid=0, jobs=1): err= 0: pid=70990: Tue Jul 23 02:07:37 2024 00:12:28.931 read: IOPS=57, BW=7391KiB/s (7568kB/s)(60.0MiB/8313msec) 00:12:28.931 slat (usec): min=7, max=1568, avg=86.92, stdev=161.95 00:12:28.931 clat (usec): min=5598, max=61863, avg=19647.97, stdev=9069.04 00:12:28.931 lat (usec): min=5784, max=61890, avg=19734.89, stdev=9102.83 00:12:28.931 clat percentiles (usec): 00:12:28.931 | 1.00th=[ 8979], 5.00th=[10028], 10.00th=[10814], 20.00th=[12518], 00:12:28.931 | 30.00th=[13960], 40.00th=[15533], 50.00th=[17171], 60.00th=[19792], 00:12:28.931 | 70.00th=[22152], 80.00th=[25297], 90.00th=[30278], 95.00th=[36439], 00:12:28.931 | 99.00th=[55313], 99.50th=[56361], 99.90th=[61604], 99.95th=[61604], 00:12:28.931 | 99.99th=[61604] 00:12:28.931 write: IOPS=62, BW=7943KiB/s (8134kB/s)(69.0MiB/8895msec); 0 zone resets 00:12:28.931 slat (usec): min=45, max=2734, avg=146.75, stdev=194.35 00:12:28.931 clat (msec): min=8, max=546, avg=128.00, stdev=64.47 00:12:28.931 lat (msec): min=9, max=546, avg=128.15, stdev=64.46 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 14], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.931 | 30.00th=[ 94], 40.00th=[ 101], 50.00th=[ 109], 60.00th=[ 118], 00:12:28.931 | 70.00th=[ 133], 80.00th=[ 163], 90.00th=[ 201], 95.00th=[ 253], 00:12:28.931 | 99.00th=[ 422], 99.50th=[ 451], 99.90th=[ 550], 99.95th=[ 550], 00:12:28.931 | 99.99th=[ 550] 00:12:28.931 bw ( KiB/s): min= 510, max=13568, per=0.88%, avg=7342.26, stdev=3343.61, samples=19 00:12:28.931 iops : min= 3, max= 106, avg=57.26, stdev=26.24, samples=19 00:12:28.931 lat (msec) : 10=2.52%, 20=26.94%, 50=17.83%, 100=20.16%, 250=29.65% 00:12:28.931 lat (msec) : 500=2.71%, 750=0.19% 00:12:28.931 cpu : usr=0.45%, sys=0.22%, ctx=1752, majf=0, minf=1 00:12:28.931 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:28.931 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 issued rwts: total=480,552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.931 job45: (groupid=0, jobs=1): err= 0: pid=70991: Tue Jul 23 02:07:37 2024 00:12:28.931 read: IOPS=58, BW=7523KiB/s (7704kB/s)(60.0MiB/8167msec) 00:12:28.931 slat (usec): min=7, max=1688, avg=69.05, stdev=148.33 00:12:28.931 clat (usec): min=9157, max=72984, avg=23474.20, stdev=10576.83 00:12:28.931 lat (usec): min=9170, max=73002, avg=23543.25, stdev=10576.12 00:12:28.931 clat percentiles (usec): 00:12:28.931 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11994], 20.00th=[14484], 00:12:28.931 | 30.00th=[16319], 40.00th=[18744], 50.00th=[21103], 60.00th=[25035], 00:12:28.931 | 70.00th=[28181], 80.00th=[29754], 90.00th=[36963], 95.00th=[44303], 00:12:28.931 | 99.00th=[60031], 99.50th=[61080], 99.90th=[72877], 99.95th=[72877], 00:12:28.931 | 99.99th=[72877] 00:12:28.931 write: IOPS=64, BW=8232KiB/s (8430kB/s)(69.5MiB/8645msec); 0 zone resets 00:12:28.931 slat (usec): min=41, max=11614, avg=157.89, stdev=510.36 00:12:28.931 clat (msec): min=23, max=438, avg=123.18, stdev=60.60 00:12:28.931 lat (msec): min=24, max=438, avg=123.34, stdev=60.56 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 31], 5.00th=[ 85], 10.00th=[ 85], 20.00th=[ 87], 00:12:28.931 | 30.00th=[ 91], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 112], 00:12:28.931 | 70.00th=[ 128], 80.00th=[ 142], 90.00th=[ 190], 95.00th=[ 236], 00:12:28.931 | 99.00th=[ 401], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 439], 00:12:28.931 | 99.99th=[ 439] 00:12:28.931 bw ( KiB/s): min= 1792, max=11264, per=0.88%, avg=7395.37, stdev=3466.89, samples=19 00:12:28.931 iops : min= 14, max= 88, avg=57.63, stdev=27.03, samples=19 00:12:28.931 lat (msec) : 10=1.45%, 20=19.11%, 50=25.19%, 100=25.77%, 250=25.97% 00:12:28.931 lat (msec) : 500=2.51% 00:12:28.931 cpu : usr=0.44%, sys=0.24%, 
ctx=1666, majf=0, minf=5 00:12:28.931 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.931 issued rwts: total=480,556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.931 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.931 job46: (groupid=0, jobs=1): err= 0: pid=70992: Tue Jul 23 02:07:37 2024 00:12:28.931 read: IOPS=44, BW=5758KiB/s (5897kB/s)(40.0MiB/7113msec) 00:12:28.931 slat (usec): min=7, max=450, avg=45.51, stdev=65.49 00:12:28.931 clat (msec): min=7, max=102, avg=19.34, stdev=14.63 00:12:28.931 lat (msec): min=8, max=102, avg=19.39, stdev=14.63 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:12:28.931 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:12:28.931 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 34], 95.00th=[ 39], 00:12:28.931 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 103], 99.95th=[ 103], 00:12:28.931 | 99.99th=[ 103] 00:12:28.931 write: IOPS=51, BW=6652KiB/s (6811kB/s)(60.0MiB/9237msec); 0 zone resets 00:12:28.931 slat (usec): min=45, max=4661, avg=159.07, stdev=310.92 00:12:28.931 clat (msec): min=84, max=447, avg=152.84, stdev=78.32 00:12:28.931 lat (msec): min=84, max=447, avg=153.00, stdev=78.32 00:12:28.931 clat percentiles (msec): 00:12:28.931 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 92], 00:12:28.931 | 30.00th=[ 97], 40.00th=[ 105], 50.00th=[ 123], 60.00th=[ 138], 00:12:28.931 | 70.00th=[ 174], 80.00th=[ 209], 90.00th=[ 268], 95.00th=[ 321], 00:12:28.931 | 99.00th=[ 435], 99.50th=[ 447], 99.90th=[ 447], 99.95th=[ 447], 00:12:28.932 | 99.99th=[ 447] 00:12:28.932 bw ( KiB/s): min= 1272, max=10645, per=0.72%, avg=6060.84, stdev=3102.28, samples=19 00:12:28.932 iops : min= 9, max= 83, avg=46.79, stdev=24.44, samples=19 00:12:28.932 lat 
(msec) : 10=3.62%, 20=26.50%, 50=8.75%, 100=21.62%, 250=31.75% 00:12:28.932 lat (msec) : 500=7.75% 00:12:28.932 cpu : usr=0.34%, sys=0.16%, ctx=1350, majf=0, minf=4 00:12:28.932 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 issued rwts: total=320,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.932 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.932 job47: (groupid=0, jobs=1): err= 0: pid=70993: Tue Jul 23 02:07:37 2024 00:12:28.932 read: IOPS=60, BW=7702KiB/s (7887kB/s)(60.0MiB/7977msec) 00:12:28.932 slat (usec): min=7, max=1720, avg=60.72, stdev=138.22 00:12:28.932 clat (usec): min=13717, max=78415, avg=27735.63, stdev=10410.91 00:12:28.932 lat (usec): min=13741, max=78423, avg=27796.35, stdev=10412.40 00:12:28.932 clat percentiles (usec): 00:12:28.932 | 1.00th=[13960], 5.00th=[15926], 10.00th=[17957], 20.00th=[20579], 00:12:28.932 | 30.00th=[21890], 40.00th=[22676], 50.00th=[24249], 60.00th=[27132], 00:12:28.932 | 70.00th=[29492], 80.00th=[32900], 90.00th=[42206], 95.00th=[49021], 00:12:28.932 | 99.00th=[66847], 99.50th=[69731], 99.90th=[78119], 99.95th=[78119], 00:12:28.932 | 99.99th=[78119] 00:12:28.932 write: IOPS=65, BW=8352KiB/s (8553kB/s)(68.5MiB/8398msec); 0 zone resets 00:12:28.932 slat (usec): min=40, max=2114, avg=150.04, stdev=199.07 00:12:28.932 clat (msec): min=35, max=435, avg=121.31, stdev=56.65 00:12:28.932 lat (msec): min=35, max=435, avg=121.46, stdev=56.65 00:12:28.932 clat percentiles (msec): 00:12:28.932 | 1.00th=[ 42], 5.00th=[ 85], 10.00th=[ 85], 20.00th=[ 87], 00:12:28.932 | 30.00th=[ 90], 40.00th=[ 93], 50.00th=[ 101], 60.00th=[ 113], 00:12:28.932 | 70.00th=[ 125], 80.00th=[ 144], 90.00th=[ 174], 95.00th=[ 241], 00:12:28.932 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 435], 99.95th=[ 435], 00:12:28.932 | 99.99th=[ 
435] 00:12:28.932 bw ( KiB/s): min= 1536, max=11520, per=0.92%, avg=7677.22, stdev=3194.61, samples=18 00:12:28.932 iops : min= 12, max= 90, avg=59.83, stdev=25.07, samples=18 00:12:28.932 lat (msec) : 20=8.07%, 50=37.16%, 100=28.21%, 250=24.12%, 500=2.43% 00:12:28.932 cpu : usr=0.45%, sys=0.22%, ctx=1730, majf=0, minf=7 00:12:28.932 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 issued rwts: total=480,548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.932 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.932 job48: (groupid=0, jobs=1): err= 0: pid=70994: Tue Jul 23 02:07:37 2024 00:12:28.932 read: IOPS=49, BW=6350KiB/s (6503kB/s)(40.0MiB/6450msec) 00:12:28.932 slat (usec): min=6, max=1182, avg=80.86, stdev=172.58 00:12:28.932 clat (msec): min=5, max=188, avg=33.50, stdev=33.61 00:12:28.932 lat (msec): min=5, max=188, avg=33.58, stdev=33.60 00:12:28.932 clat percentiles (msec): 00:12:28.932 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 15], 00:12:28.932 | 30.00th=[ 18], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 24], 00:12:28.932 | 70.00th=[ 29], 80.00th=[ 41], 90.00th=[ 75], 95.00th=[ 104], 00:12:28.932 | 99.00th=[ 161], 99.50th=[ 163], 99.90th=[ 188], 99.95th=[ 188], 00:12:28.932 | 99.99th=[ 188] 00:12:28.932 write: IOPS=47, BW=6110KiB/s (6257kB/s)(52.0MiB/8715msec); 0 zone resets 00:12:28.932 slat (usec): min=40, max=1500, avg=146.27, stdev=151.99 00:12:28.932 clat (msec): min=82, max=519, avg=166.62, stdev=69.28 00:12:28.932 lat (msec): min=82, max=519, avg=166.77, stdev=69.28 00:12:28.932 clat percentiles (msec): 00:12:28.932 | 1.00th=[ 86], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 108], 00:12:28.932 | 30.00th=[ 118], 40.00th=[ 136], 50.00th=[ 150], 60.00th=[ 169], 00:12:28.932 | 70.00th=[ 190], 80.00th=[ 222], 90.00th=[ 257], 95.00th=[ 
300], 00:12:28.932 | 99.00th=[ 376], 99.50th=[ 464], 99.90th=[ 518], 99.95th=[ 518], 00:12:28.932 | 99.99th=[ 518] 00:12:28.932 bw ( KiB/s): min= 1788, max= 9197, per=0.66%, avg=5493.37, stdev=2411.81, samples=19 00:12:28.932 iops : min= 13, max= 71, avg=42.63, stdev=18.75, samples=19 00:12:28.932 lat (msec) : 10=0.95%, 20=20.11%, 50=16.58%, 100=11.01%, 250=44.16% 00:12:28.932 lat (msec) : 500=7.07%, 750=0.14% 00:12:28.932 cpu : usr=0.32%, sys=0.16%, ctx=1278, majf=0, minf=7 00:12:28.932 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 issued rwts: total=320,416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.932 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.932 job49: (groupid=0, jobs=1): err= 0: pid=70995: Tue Jul 23 02:07:37 2024 00:12:28.932 read: IOPS=61, BW=7848KiB/s (8036kB/s)(60.0MiB/7829msec) 00:12:28.932 slat (usec): min=7, max=804, avg=56.07, stdev=85.56 00:12:28.932 clat (usec): min=10361, max=70029, avg=26045.71, stdev=10781.53 00:12:28.932 lat (usec): min=10380, max=70092, avg=26101.78, stdev=10772.81 00:12:28.932 clat percentiles (usec): 00:12:28.932 | 1.00th=[11207], 5.00th=[13173], 10.00th=[15270], 20.00th=[18220], 00:12:28.932 | 30.00th=[20841], 40.00th=[22414], 50.00th=[24249], 60.00th=[26608], 00:12:28.932 | 70.00th=[27919], 80.00th=[29754], 90.00th=[38536], 95.00th=[50070], 00:12:28.932 | 99.00th=[65274], 99.50th=[67634], 99.90th=[69731], 99.95th=[69731], 00:12:28.932 | 99.99th=[69731] 00:12:28.932 write: IOPS=62, BW=8029KiB/s (8222kB/s)(66.6MiB/8497msec); 0 zone resets 00:12:28.932 slat (usec): min=29, max=2386, avg=144.02, stdev=181.92 00:12:28.932 clat (msec): min=55, max=421, avg=125.87, stdev=56.85 00:12:28.932 lat (msec): min=55, max=421, avg=126.01, stdev=56.85 00:12:28.932 clat percentiles (msec): 00:12:28.932 | 
1.00th=[ 64], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:12:28.932 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 109], 60.00th=[ 121], 00:12:28.932 | 70.00th=[ 138], 80.00th=[ 155], 90.00th=[ 174], 95.00th=[ 239], 00:12:28.932 | 99.00th=[ 376], 99.50th=[ 401], 99.90th=[ 422], 99.95th=[ 422], 00:12:28.932 | 99.99th=[ 422] 00:12:28.932 bw ( KiB/s): min= 1792, max=12032, per=0.85%, avg=7070.84, stdev=3231.02, samples=19 00:12:28.932 iops : min= 14, max= 94, avg=55.16, stdev=25.24, samples=19 00:12:28.932 lat (msec) : 20=11.94%, 50=32.97%, 100=24.88%, 250=27.94%, 500=2.27% 00:12:28.932 cpu : usr=0.45%, sys=0.18%, ctx=1705, majf=0, minf=1 00:12:28.932 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 issued rwts: total=480,533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.932 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.932 job50: (groupid=0, jobs=1): err= 0: pid=70996: Tue Jul 23 02:07:37 2024 00:12:28.932 read: IOPS=75, BW=9656KiB/s (9888kB/s)(80.0MiB/8484msec) 00:12:28.932 slat (usec): min=7, max=1828, avg=60.13, stdev=144.77 00:12:28.932 clat (msec): min=4, max=111, avg=16.93, stdev=12.98 00:12:28.932 lat (msec): min=4, max=111, avg=16.99, stdev=12.99 00:12:28.932 clat percentiles (msec): 00:12:28.932 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:12:28.932 | 30.00th=[ 10], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:12:28.932 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 36], 00:12:28.932 | 99.00th=[ 87], 99.50th=[ 95], 99.90th=[ 112], 99.95th=[ 112], 00:12:28.932 | 99.99th=[ 112] 00:12:28.932 write: IOPS=92, BW=11.5MiB/s (12.1MB/s)(100MiB/8660msec); 0 zone resets 00:12:28.932 slat (usec): min=42, max=2717, avg=140.33, stdev=209.23 00:12:28.932 clat (usec): min=1372, max=339343, avg=85536.55, stdev=39419.58 
00:12:28.932 lat (msec): min=2, max=339, avg=85.68, stdev=39.42 00:12:28.932 clat percentiles (msec): 00:12:28.932 | 1.00th=[ 8], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 64], 00:12:28.932 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:12:28.932 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 122], 95.00th=[ 178], 00:12:28.932 | 99.00th=[ 257], 99.50th=[ 271], 99.90th=[ 338], 99.95th=[ 338], 00:12:28.932 | 99.99th=[ 338] 00:12:28.932 bw ( KiB/s): min= 1792, max=18468, per=1.21%, avg=10149.35, stdev=4888.84, samples=20 00:12:28.932 iops : min= 14, max= 144, avg=79.20, stdev=38.11, samples=20 00:12:28.932 lat (msec) : 2=0.07%, 4=0.21%, 10=14.31%, 20=19.24%, 50=10.76% 00:12:28.932 lat (msec) : 100=46.18%, 250=8.54%, 500=0.69% 00:12:28.932 cpu : usr=0.67%, sys=0.31%, ctx=2258, majf=0, minf=5 00:12:28.932 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.932 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.932 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.932 job51: (groupid=0, jobs=1): err= 0: pid=70997: Tue Jul 23 02:07:37 2024 00:12:28.933 read: IOPS=74, BW=9599KiB/s (9830kB/s)(80.0MiB/8534msec) 00:12:28.933 slat (usec): min=6, max=863, avg=51.97, stdev=93.47 00:12:28.933 clat (usec): min=6365, max=68948, avg=16276.07, stdev=7585.07 00:12:28.933 lat (usec): min=6496, max=68993, avg=16328.04, stdev=7583.41 00:12:28.933 clat percentiles (usec): 00:12:28.933 | 1.00th=[ 8291], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11469], 00:12:28.933 | 30.00th=[12256], 40.00th=[13566], 50.00th=[14877], 60.00th=[15926], 00:12:28.933 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21890], 95.00th=[26084], 00:12:28.933 | 99.00th=[55313], 99.50th=[58983], 99.90th=[68682], 99.95th=[68682], 00:12:28.933 | 99.99th=[68682] 00:12:28.933 write: 
IOPS=91, BW=11.5MiB/s (12.0MB/s)(100MiB/8711msec); 0 zone resets 00:12:28.933 slat (usec): min=40, max=1716, avg=153.40, stdev=186.24 00:12:28.933 clat (msec): min=8, max=266, avg=86.27, stdev=33.66 00:12:28.933 lat (msec): min=8, max=266, avg=86.43, stdev=33.66 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 18], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 64], 00:12:28.933 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:12:28.933 | 70.00th=[ 91], 80.00th=[ 101], 90.00th=[ 120], 95.00th=[ 153], 00:12:28.933 | 99.00th=[ 220], 99.50th=[ 232], 99.90th=[ 268], 99.95th=[ 268], 00:12:28.933 | 99.99th=[ 268] 00:12:28.933 bw ( KiB/s): min= 768, max=16896, per=1.21%, avg=10147.30, stdev=4482.76, samples=20 00:12:28.933 iops : min= 6, max= 132, avg=79.15, stdev=35.03, samples=20 00:12:28.933 lat (msec) : 10=3.06%, 20=34.86%, 50=6.53%, 100=44.17%, 250=11.25% 00:12:28.933 lat (msec) : 500=0.14% 00:12:28.933 cpu : usr=0.76%, sys=0.21%, ctx=2342, majf=0, minf=3 00:12:28.933 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 issued rwts: total=640,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.933 job52: (groupid=0, jobs=1): err= 0: pid=70999: Tue Jul 23 02:07:37 2024 00:12:28.933 read: IOPS=76, BW=9728KiB/s (9962kB/s)(80.0MiB/8421msec) 00:12:28.933 slat (usec): min=7, max=1473, avg=65.14, stdev=132.45 00:12:28.933 clat (usec): min=5651, max=44188, avg=15260.02, stdev=6961.59 00:12:28.933 lat (usec): min=5684, max=44199, avg=15325.16, stdev=6943.04 00:12:28.933 clat percentiles (usec): 00:12:28.933 | 1.00th=[ 6194], 5.00th=[ 7767], 10.00th=[ 8586], 20.00th=[ 9372], 00:12:28.933 | 30.00th=[10552], 40.00th=[11600], 50.00th=[13042], 60.00th=[14615], 00:12:28.933 | 70.00th=[17957], 
80.00th=[20317], 90.00th=[25560], 95.00th=[28705], 00:12:28.933 | 99.00th=[39060], 99.50th=[40633], 99.90th=[44303], 99.95th=[44303], 00:12:28.933 | 99.99th=[44303] 00:12:28.933 write: IOPS=87, BW=10.9MiB/s (11.5MB/s)(96.0MiB/8771msec); 0 zone resets 00:12:28.933 slat (usec): min=44, max=1294, avg=134.79, stdev=144.59 00:12:28.933 clat (msec): min=52, max=277, avg=90.60, stdev=35.68 00:12:28.933 lat (msec): min=52, max=277, avg=90.74, stdev=35.69 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 65], 00:12:28.933 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 86], 00:12:28.933 | 70.00th=[ 93], 80.00th=[ 110], 90.00th=[ 142], 95.00th=[ 167], 00:12:28.933 | 99.00th=[ 220], 99.50th=[ 228], 99.90th=[ 279], 99.95th=[ 279], 00:12:28.933 | 99.99th=[ 279] 00:12:28.933 bw ( KiB/s): min= 1792, max=15390, per=1.16%, avg=9721.70, stdev=4072.14, samples=20 00:12:28.933 iops : min= 14, max= 120, avg=75.80, stdev=31.79, samples=20 00:12:28.933 lat (msec) : 10=11.65%, 20=23.65%, 50=10.16%, 100=41.34%, 250=13.00% 00:12:28.933 lat (msec) : 500=0.21% 00:12:28.933 cpu : usr=0.67%, sys=0.28%, ctx=2366, majf=0, minf=7 00:12:28.933 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 issued rwts: total=640,768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.933 job53: (groupid=0, jobs=1): err= 0: pid=71003: Tue Jul 23 02:07:37 2024 00:12:28.933 read: IOPS=76, BW=9842KiB/s (10.1MB/s)(81.5MiB/8480msec) 00:12:28.933 slat (usec): min=6, max=1410, avg=56.22, stdev=100.27 00:12:28.933 clat (usec): min=7625, max=66645, avg=18450.17, stdev=8544.25 00:12:28.933 lat (usec): min=7655, max=66661, avg=18506.39, stdev=8542.38 00:12:28.933 clat percentiles (usec): 
00:12:28.933 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11600], 00:12:28.933 | 30.00th=[13566], 40.00th=[14746], 50.00th=[16319], 60.00th=[19006], 00:12:28.933 | 70.00th=[21103], 80.00th=[23200], 90.00th=[27395], 95.00th=[34866], 00:12:28.933 | 99.00th=[51643], 99.50th=[57934], 99.90th=[66847], 99.95th=[66847], 00:12:28.933 | 99.99th=[66847] 00:12:28.933 write: IOPS=94, BW=11.8MiB/s (12.3MB/s)(100MiB/8490msec); 0 zone resets 00:12:28.933 slat (usec): min=34, max=3276, avg=148.71, stdev=206.81 00:12:28.933 clat (msec): min=20, max=221, avg=83.95, stdev=29.65 00:12:28.933 lat (msec): min=21, max=221, avg=84.10, stdev=29.65 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 25], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 63], 00:12:28.933 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 77], 60.00th=[ 82], 00:12:28.933 | 70.00th=[ 87], 80.00th=[ 97], 90.00th=[ 117], 95.00th=[ 144], 00:12:28.933 | 99.00th=[ 201], 99.50th=[ 207], 99.90th=[ 222], 99.95th=[ 222], 00:12:28.933 | 99.99th=[ 222] 00:12:28.933 bw ( KiB/s): min= 2304, max=15104, per=1.22%, avg=10197.58, stdev=4447.24, samples=19 00:12:28.933 iops : min= 18, max= 118, avg=79.53, stdev=34.72, samples=19 00:12:28.933 lat (msec) : 10=4.13%, 20=24.79%, 50=16.05%, 100=45.25%, 250=9.78% 00:12:28.933 cpu : usr=0.65%, sys=0.30%, ctx=2437, majf=0, minf=1 00:12:28.933 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 issued rwts: total=652,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.933 job54: (groupid=0, jobs=1): err= 0: pid=71004: Tue Jul 23 02:07:37 2024 00:12:28.933 read: IOPS=81, BW=10.2MiB/s (10.7MB/s)(80.0MiB/7850msec) 00:12:28.933 slat (usec): min=6, max=855, avg=61.43, stdev=105.06 00:12:28.933 clat (msec): min=4, 
max=217, avg=18.95, stdev=27.90 00:12:28.933 lat (msec): min=4, max=217, avg=19.01, stdev=27.91 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:12:28.933 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15], 00:12:28.933 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 27], 95.00th=[ 43], 00:12:28.933 | 99.00th=[ 213], 99.50th=[ 215], 99.90th=[ 218], 99.95th=[ 218], 00:12:28.933 | 99.99th=[ 218] 00:12:28.933 write: IOPS=78, BW=9.83MiB/s (10.3MB/s)(83.5MiB/8497msec); 0 zone resets 00:12:28.933 slat (usec): min=39, max=1618, avg=151.77, stdev=175.67 00:12:28.933 clat (msec): min=41, max=302, avg=101.02, stdev=40.34 00:12:28.933 lat (msec): min=41, max=302, avg=101.17, stdev=40.36 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 57], 5.00th=[ 61], 10.00th=[ 63], 20.00th=[ 67], 00:12:28.933 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 91], 60.00th=[ 101], 00:12:28.933 | 70.00th=[ 112], 80.00th=[ 131], 90.00th=[ 161], 95.00th=[ 180], 00:12:28.933 | 99.00th=[ 234], 99.50th=[ 262], 99.90th=[ 305], 99.95th=[ 305], 00:12:28.933 | 99.99th=[ 305] 00:12:28.933 bw ( KiB/s): min= 2816, max=14592, per=1.06%, avg=8899.95, stdev=3357.58, samples=19 00:12:28.933 iops : min= 22, max= 114, avg=69.26, stdev=26.19, samples=19 00:12:28.933 lat (msec) : 10=17.89%, 20=20.34%, 50=8.64%, 100=31.27%, 250=21.56% 00:12:28.933 lat (msec) : 500=0.31% 00:12:28.933 cpu : usr=0.63%, sys=0.26%, ctx=2191, majf=0, minf=1 00:12:28.933 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.933 issued rwts: total=640,668,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.933 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.933 job55: (groupid=0, jobs=1): err= 0: pid=71006: Tue Jul 23 02:07:37 2024 00:12:28.933 read: IOPS=77, 
BW=9956KiB/s (10.2MB/s)(80.0MiB/8228msec) 00:12:28.933 slat (usec): min=7, max=1568, avg=58.26, stdev=119.01 00:12:28.933 clat (msec): min=4, max=307, avg=24.02, stdev=36.64 00:12:28.933 lat (msec): min=4, max=307, avg=24.08, stdev=36.63 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 10], 00:12:28.933 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 19], 00:12:28.933 | 70.00th=[ 20], 80.00th=[ 24], 90.00th=[ 35], 95.00th=[ 62], 00:12:28.933 | 99.00th=[ 230], 99.50th=[ 275], 99.90th=[ 309], 99.95th=[ 309], 00:12:28.933 | 99.99th=[ 309] 00:12:28.933 write: IOPS=80, BW=10.1MiB/s (10.6MB/s)(81.0MiB/8031msec); 0 zone resets 00:12:28.933 slat (usec): min=33, max=2211, avg=129.52, stdev=175.34 00:12:28.933 clat (msec): min=25, max=279, avg=98.39, stdev=39.88 00:12:28.933 lat (msec): min=25, max=279, avg=98.52, stdev=39.89 00:12:28.933 clat percentiles (msec): 00:12:28.933 | 1.00th=[ 31], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 65], 00:12:28.933 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 102], 00:12:28.933 | 70.00th=[ 112], 80.00th=[ 125], 90.00th=[ 148], 95.00th=[ 174], 00:12:28.933 | 99.00th=[ 234], 99.50th=[ 275], 99.90th=[ 279], 99.95th=[ 279], 00:12:28.933 | 99.99th=[ 279] 00:12:28.933 bw ( KiB/s): min= 2816, max=17152, per=1.09%, avg=9098.28, stdev=3886.34, samples=18 00:12:28.933 iops : min= 22, max= 134, avg=70.94, stdev=30.40, samples=18 00:12:28.933 lat (msec) : 10=11.80%, 20=23.29%, 50=11.88%, 100=30.59%, 250=21.66% 00:12:28.934 lat (msec) : 500=0.78% 00:12:28.934 cpu : usr=0.61%, sys=0.23%, ctx=2136, majf=0, minf=4 00:12:28.934 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 issued rwts: total=640,648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.934 latency : target=0, window=0, 
percentile=100.00%, depth=8 00:12:28.934 job56: (groupid=0, jobs=1): err= 0: pid=71008: Tue Jul 23 02:07:37 2024 00:12:28.934 read: IOPS=76, BW=9810KiB/s (10.0MB/s)(80.0MiB/8351msec) 00:12:28.934 slat (usec): min=6, max=1016, avg=52.18, stdev=101.64 00:12:28.934 clat (usec): min=6451, max=46329, avg=15749.27, stdev=7273.68 00:12:28.934 lat (usec): min=6467, max=46351, avg=15801.45, stdev=7275.19 00:12:28.934 clat percentiles (usec): 00:12:28.934 | 1.00th=[ 6783], 5.00th=[ 7373], 10.00th=[ 8291], 20.00th=[ 9110], 00:12:28.934 | 30.00th=[10683], 40.00th=[12387], 50.00th=[14484], 60.00th=[16450], 00:12:28.934 | 70.00th=[19006], 80.00th=[20841], 90.00th=[24511], 95.00th=[28181], 00:12:28.934 | 99.00th=[42730], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:12:28.934 | 99.99th=[46400] 00:12:28.934 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(97.4MiB/8765msec); 0 zone resets 00:12:28.934 slat (usec): min=34, max=8160, avg=160.91, stdev=367.61 00:12:28.934 clat (msec): min=9, max=288, avg=89.21, stdev=34.63 00:12:28.934 lat (msec): min=9, max=288, avg=89.37, stdev=34.62 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 44], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 65], 00:12:28.934 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 85], 00:12:28.934 | 70.00th=[ 91], 80.00th=[ 105], 90.00th=[ 140], 95.00th=[ 169], 00:12:28.934 | 99.00th=[ 203], 99.50th=[ 228], 99.90th=[ 288], 99.95th=[ 288], 00:12:28.934 | 99.99th=[ 288] 00:12:28.934 bw ( KiB/s): min= 2560, max=15872, per=1.18%, avg=9865.10, stdev=4109.82, samples=20 00:12:28.934 iops : min= 20, max= 124, avg=76.95, stdev=32.05, samples=20 00:12:28.934 lat (msec) : 10=11.70%, 20=22.55%, 50=11.49%, 100=42.28%, 250=11.91% 00:12:28.934 lat (msec) : 500=0.07% 00:12:28.934 cpu : usr=0.71%, sys=0.25%, ctx=2274, majf=0, minf=3 00:12:28.934 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 
complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 issued rwts: total=640,779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.934 job57: (groupid=0, jobs=1): err= 0: pid=71014: Tue Jul 23 02:07:37 2024 00:12:28.934 read: IOPS=77, BW=9943KiB/s (10.2MB/s)(80.0MiB/8239msec) 00:12:28.934 slat (usec): min=6, max=1317, avg=64.67, stdev=120.59 00:12:28.934 clat (msec): min=4, max=103, avg=14.82, stdev=11.70 00:12:28.934 lat (msec): min=4, max=103, avg=14.88, stdev=11.70 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 8], 00:12:28.934 | 30.00th=[ 9], 40.00th=[ 10], 50.00th=[ 11], 60.00th=[ 14], 00:12:28.934 | 70.00th=[ 17], 80.00th=[ 21], 90.00th=[ 24], 95.00th=[ 35], 00:12:28.934 | 99.00th=[ 68], 99.50th=[ 84], 99.90th=[ 105], 99.95th=[ 105], 00:12:28.934 | 99.99th=[ 105] 00:12:28.934 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(94.2MiB/8832msec); 0 zone resets 00:12:28.934 slat (usec): min=32, max=3074, avg=141.17, stdev=212.72 00:12:28.934 clat (msec): min=46, max=274, avg=92.95, stdev=35.61 00:12:28.934 lat (msec): min=46, max=274, avg=93.09, stdev=35.62 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 53], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 65], 00:12:28.934 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 90], 00:12:28.934 | 70.00th=[ 101], 80.00th=[ 120], 90.00th=[ 148], 95.00th=[ 167], 00:12:28.934 | 99.00th=[ 207], 99.50th=[ 243], 99.90th=[ 275], 99.95th=[ 275], 00:12:28.934 | 99.99th=[ 275] 00:12:28.934 bw ( KiB/s): min= 3072, max=15360, per=1.14%, avg=9541.25, stdev=3610.82, samples=20 00:12:28.934 iops : min= 24, max= 120, avg=74.45, stdev=28.28, samples=20 00:12:28.934 lat (msec) : 10=21.23%, 20=15.28%, 50=8.61%, 100=38.45%, 250=16.21% 00:12:28.934 lat (msec) : 500=0.22% 00:12:28.934 cpu : usr=0.65%, sys=0.22%, ctx=2342, majf=0, minf=3 00:12:28.934 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 
8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 issued rwts: total=640,754,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.934 job58: (groupid=0, jobs=1): err= 0: pid=71015: Tue Jul 23 02:07:37 2024 00:12:28.934 read: IOPS=76, BW=9752KiB/s (9986kB/s)(80.0MiB/8400msec) 00:12:28.934 slat (usec): min=6, max=921, avg=55.14, stdev=113.08 00:12:28.934 clat (usec): min=5138, max=87854, avg=15502.28, stdev=9695.67 00:12:28.934 lat (usec): min=5173, max=87876, avg=15557.42, stdev=9694.38 00:12:28.934 clat percentiles (usec): 00:12:28.934 | 1.00th=[ 6915], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8848], 00:12:28.934 | 30.00th=[ 9634], 40.00th=[10945], 50.00th=[12387], 60.00th=[14746], 00:12:28.934 | 70.00th=[18744], 80.00th=[20841], 90.00th=[24773], 95.00th=[29230], 00:12:28.934 | 99.00th=[68682], 99.50th=[78119], 99.90th=[87557], 99.95th=[87557], 00:12:28.934 | 99.99th=[87557] 00:12:28.934 write: IOPS=86, BW=10.8MiB/s (11.3MB/s)(94.5MiB/8781msec); 0 zone resets 00:12:28.934 slat (usec): min=42, max=3348, avg=151.91, stdev=203.11 00:12:28.934 clat (msec): min=32, max=238, avg=92.18, stdev=34.10 00:12:28.934 lat (msec): min=32, max=238, avg=92.33, stdev=34.09 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 39], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 65], 00:12:28.934 | 30.00th=[ 70], 40.00th=[ 77], 50.00th=[ 83], 60.00th=[ 89], 00:12:28.934 | 70.00th=[ 99], 80.00th=[ 113], 90.00th=[ 144], 95.00th=[ 167], 00:12:28.934 | 99.00th=[ 203], 99.50th=[ 230], 99.90th=[ 239], 99.95th=[ 239], 00:12:28.934 | 99.99th=[ 239] 00:12:28.934 bw ( KiB/s): min= 2043, max=14848, per=1.15%, avg=9583.90, stdev=3898.19, samples=20 00:12:28.934 iops : min= 15, max= 116, avg=74.75, stdev=30.52, samples=20 00:12:28.934 lat (msec) : 10=14.61%, 
20=20.20%, 50=11.10%, 100=38.83%, 250=15.26% 00:12:28.934 cpu : usr=0.61%, sys=0.36%, ctx=2247, majf=0, minf=3 00:12:28.934 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 issued rwts: total=640,756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.934 job59: (groupid=0, jobs=1): err= 0: pid=71016: Tue Jul 23 02:07:37 2024 00:12:28.934 read: IOPS=74, BW=9480KiB/s (9707kB/s)(74.6MiB/8061msec) 00:12:28.934 slat (usec): min=7, max=974, avg=68.51, stdev=120.94 00:12:28.934 clat (msec): min=3, max=589, avg=26.43, stdev=56.53 00:12:28.934 lat (msec): min=3, max=589, avg=26.50, stdev=56.54 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:12:28.934 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 17], 00:12:28.934 | 70.00th=[ 21], 80.00th=[ 26], 90.00th=[ 40], 95.00th=[ 51], 00:12:28.934 | 99.00th=[ 439], 99.50th=[ 489], 99.90th=[ 592], 99.95th=[ 592], 00:12:28.934 | 99.99th=[ 592] 00:12:28.934 write: IOPS=79, BW=9.98MiB/s (10.5MB/s)(80.0MiB/8016msec); 0 zone resets 00:12:28.934 slat (usec): min=31, max=3314, avg=129.07, stdev=194.18 00:12:28.934 clat (msec): min=55, max=421, avg=99.60, stdev=42.90 00:12:28.934 lat (msec): min=55, max=421, avg=99.73, stdev=42.91 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 58], 5.00th=[ 61], 10.00th=[ 62], 20.00th=[ 70], 00:12:28.934 | 30.00th=[ 77], 40.00th=[ 83], 50.00th=[ 90], 60.00th=[ 99], 00:12:28.934 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 146], 95.00th=[ 174], 00:12:28.934 | 99.00th=[ 279], 99.50th=[ 347], 99.90th=[ 422], 99.95th=[ 422], 00:12:28.934 | 99.99th=[ 422] 00:12:28.934 bw ( KiB/s): min= 1277, max=14848, per=1.05%, avg=8760.72, stdev=4149.21, samples=18 
00:12:28.934 iops : min= 9, max= 116, avg=68.39, stdev=32.52, samples=18 00:12:28.934 lat (msec) : 4=0.08%, 10=11.08%, 20=22.31%, 50=12.37%, 100=32.17% 00:12:28.934 lat (msec) : 250=20.37%, 500=1.46%, 750=0.16% 00:12:28.934 cpu : usr=0.59%, sys=0.19%, ctx=1949, majf=0, minf=5 00:12:28.934 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.934 issued rwts: total=597,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.934 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.934 job60: (groupid=0, jobs=1): err= 0: pid=71017: Tue Jul 23 02:07:37 2024 00:12:28.934 read: IOPS=80, BW=10.1MiB/s (10.5MB/s)(80.0MiB/7955msec) 00:12:28.934 slat (usec): min=7, max=4193, avg=60.58, stdev=194.01 00:12:28.934 clat (usec): min=3426, max=99464, avg=12635.57, stdev=11804.31 00:12:28.934 lat (usec): min=3446, max=99492, avg=12696.14, stdev=11795.91 00:12:28.934 clat percentiles (usec): 00:12:28.934 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6259], 20.00th=[ 7111], 00:12:28.934 | 30.00th=[ 8029], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[10945], 00:12:28.934 | 70.00th=[11994], 80.00th=[13173], 90.00th=[18220], 95.00th=[27395], 00:12:28.934 | 99.00th=[80217], 99.50th=[89654], 99.90th=[99091], 99.95th=[99091], 00:12:28.934 | 99.99th=[99091] 00:12:28.934 write: IOPS=79, BW=9.92MiB/s (10.4MB/s)(89.5MiB/9020msec); 0 zone resets 00:12:28.934 slat (usec): min=38, max=1757, avg=131.14, stdev=158.38 00:12:28.934 clat (msec): min=53, max=295, avg=100.04, stdev=44.02 00:12:28.934 lat (msec): min=54, max=295, avg=100.17, stdev=44.03 00:12:28.934 clat percentiles (msec): 00:12:28.934 | 1.00th=[ 57], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 65], 00:12:28.934 | 30.00th=[ 70], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 93], 00:12:28.935 | 70.00th=[ 113], 80.00th=[ 133], 90.00th=[ 165], 95.00th=[ 
190], 00:12:28.935 | 99.00th=[ 259], 99.50th=[ 268], 99.90th=[ 296], 99.95th=[ 296], 00:12:28.935 | 99.99th=[ 296] 00:12:28.935 bw ( KiB/s): min= 2782, max=15118, per=1.07%, avg=8984.58, stdev=3607.69, samples=19 00:12:28.935 iops : min= 21, max= 118, avg=69.68, stdev=28.25, samples=19 00:12:28.935 lat (msec) : 4=0.07%, 10=23.45%, 20=19.69%, 50=2.88%, 100=34.73% 00:12:28.935 lat (msec) : 250=18.51%, 500=0.66% 00:12:28.935 cpu : usr=0.52%, sys=0.33%, ctx=2209, majf=0, minf=1 00:12:28.935 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 issued rwts: total=640,716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.935 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.935 job61: (groupid=0, jobs=1): err= 0: pid=71018: Tue Jul 23 02:07:37 2024 00:12:28.935 read: IOPS=76, BW=9730KiB/s (9964kB/s)(80.0MiB/8419msec) 00:12:28.935 slat (usec): min=7, max=1049, avg=57.58, stdev=118.40 00:12:28.935 clat (msec): min=7, max=138, avg=18.39, stdev=15.13 00:12:28.935 lat (msec): min=7, max=138, avg=18.45, stdev=15.13 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:12:28.935 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 16], 00:12:28.935 | 70.00th=[ 17], 80.00th=[ 22], 90.00th=[ 28], 95.00th=[ 40], 00:12:28.935 | 99.00th=[ 95], 99.50th=[ 109], 99.90th=[ 138], 99.95th=[ 138], 00:12:28.935 | 99.99th=[ 138] 00:12:28.935 write: IOPS=91, BW=11.4MiB/s (12.0MB/s)(97.6MiB/8561msec); 0 zone resets 00:12:28.935 slat (usec): min=37, max=4263, avg=141.38, stdev=228.42 00:12:28.935 clat (msec): min=53, max=322, avg=86.81, stdev=37.88 00:12:28.935 lat (msec): min=53, max=322, avg=86.95, stdev=37.88 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 64], 
00:12:28.935 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:12:28.935 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 126], 95.00th=[ 163], 00:12:28.935 | 99.00th=[ 268], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 321], 00:12:28.935 | 99.99th=[ 321] 00:12:28.935 bw ( KiB/s): min= 1788, max=15104, per=1.18%, avg=9889.80, stdev=4429.15, samples=20 00:12:28.935 iops : min= 13, max= 118, avg=77.00, stdev=34.64, samples=20 00:12:28.935 lat (msec) : 10=4.22%, 20=31.32%, 50=7.60%, 100=45.74%, 250=10.49% 00:12:28.935 lat (msec) : 500=0.63% 00:12:28.935 cpu : usr=0.57%, sys=0.30%, ctx=2373, majf=0, minf=3 00:12:28.935 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 issued rwts: total=640,781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.935 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.935 job62: (groupid=0, jobs=1): err= 0: pid=71019: Tue Jul 23 02:07:37 2024 00:12:28.935 read: IOPS=71, BW=9129KiB/s (9348kB/s)(80.0MiB/8974msec) 00:12:28.935 slat (usec): min=5, max=1282, avg=49.77, stdev=99.82 00:12:28.935 clat (msec): min=3, max=115, avg=13.43, stdev=12.95 00:12:28.935 lat (msec): min=3, max=115, avg=13.48, stdev=12.94 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 7], 00:12:28.935 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 12], 00:12:28.935 | 70.00th=[ 14], 80.00th=[ 17], 90.00th=[ 22], 95.00th=[ 31], 00:12:28.935 | 99.00th=[ 105], 99.50th=[ 112], 99.90th=[ 115], 99.95th=[ 115], 00:12:28.935 | 99.99th=[ 115] 00:12:28.935 write: IOPS=88, BW=11.1MiB/s (11.6MB/s)(99.9MiB/8995msec); 0 zone resets 00:12:28.935 slat (usec): min=37, max=13581, avg=145.05, stdev=499.06 00:12:28.935 clat (msec): min=4, max=340, avg=89.48, stdev=45.21 00:12:28.935 lat (msec): min=4, max=340, 
avg=89.63, stdev=45.27 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 7], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 62], 00:12:28.935 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 82], 00:12:28.935 | 70.00th=[ 92], 80.00th=[ 117], 90.00th=[ 153], 95.00th=[ 184], 00:12:28.935 | 99.00th=[ 239], 99.50th=[ 288], 99.90th=[ 342], 99.95th=[ 342], 00:12:28.935 | 99.99th=[ 342] 00:12:28.935 bw ( KiB/s): min= 2304, max=23808, per=1.21%, avg=10123.90, stdev=5416.91, samples=20 00:12:28.935 iops : min= 18, max= 186, avg=79.05, stdev=42.31, samples=20 00:12:28.935 lat (msec) : 4=0.14%, 10=24.39%, 20=15.57%, 50=6.46%, 100=38.99% 00:12:28.935 lat (msec) : 250=14.11%, 500=0.35% 00:12:28.935 cpu : usr=0.59%, sys=0.26%, ctx=2348, majf=0, minf=7 00:12:28.935 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 issued rwts: total=640,799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.935 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.935 job63: (groupid=0, jobs=1): err= 0: pid=71020: Tue Jul 23 02:07:37 2024 00:12:28.935 read: IOPS=74, BW=9508KiB/s (9736kB/s)(80.0MiB/8616msec) 00:12:28.935 slat (usec): min=7, max=1093, avg=66.15, stdev=117.83 00:12:28.935 clat (usec): min=6678, max=46002, avg=19910.64, stdev=7430.76 00:12:28.935 lat (usec): min=6710, max=46016, avg=19976.78, stdev=7428.86 00:12:28.935 clat percentiles (usec): 00:12:28.935 | 1.00th=[ 8586], 5.00th=[11207], 10.00th=[12387], 20.00th=[13698], 00:12:28.935 | 30.00th=[14877], 40.00th=[16450], 50.00th=[18482], 60.00th=[20579], 00:12:28.935 | 70.00th=[22414], 80.00th=[24773], 90.00th=[31065], 95.00th=[34341], 00:12:28.935 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:12:28.935 | 99.99th=[45876] 00:12:28.935 write: IOPS=88, BW=11.1MiB/s 
(11.7MB/s)(93.9MiB/8447msec); 0 zone resets 00:12:28.935 slat (usec): min=31, max=3400, avg=136.37, stdev=231.87 00:12:28.935 clat (msec): min=31, max=320, avg=88.52, stdev=38.52 00:12:28.935 lat (msec): min=31, max=320, avg=88.65, stdev=38.52 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 39], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 64], 00:12:28.935 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 84], 00:12:28.935 | 70.00th=[ 91], 80.00th=[ 105], 90.00th=[ 140], 95.00th=[ 178], 00:12:28.935 | 99.00th=[ 241], 99.50th=[ 268], 99.90th=[ 321], 99.95th=[ 321], 00:12:28.935 | 99.99th=[ 321] 00:12:28.935 bw ( KiB/s): min= 1024, max=17186, per=1.20%, avg=10013.84, stdev=4561.42, samples=19 00:12:28.935 iops : min= 8, max= 134, avg=78.00, stdev=35.69, samples=19 00:12:28.935 lat (msec) : 10=1.65%, 20=24.37%, 50=20.56%, 100=41.55%, 250=11.43% 00:12:28.935 lat (msec) : 500=0.43% 00:12:28.935 cpu : usr=0.63%, sys=0.27%, ctx=2316, majf=0, minf=1 00:12:28.935 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 issued rwts: total=640,751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.935 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.935 job64: (groupid=0, jobs=1): err= 0: pid=71021: Tue Jul 23 02:07:37 2024 00:12:28.935 read: IOPS=76, BW=9856KiB/s (10.1MB/s)(80.0MiB/8312msec) 00:12:28.935 slat (usec): min=7, max=5726, avg=72.09, stdev=257.36 00:12:28.935 clat (msec): min=5, max=110, avg=18.57, stdev=12.10 00:12:28.935 lat (msec): min=5, max=110, avg=18.65, stdev=12.10 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 7], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 12], 00:12:28.935 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 17], 60.00th=[ 18], 00:12:28.935 | 70.00th=[ 21], 80.00th=[ 23], 90.00th=[ 30], 95.00th=[ 34], 00:12:28.935 | 
99.00th=[ 100], 99.50th=[ 105], 99.90th=[ 111], 99.95th=[ 111], 00:12:28.935 | 99.99th=[ 111] 00:12:28.935 write: IOPS=85, BW=10.7MiB/s (11.2MB/s)(89.8MiB/8426msec); 0 zone resets 00:12:28.935 slat (usec): min=38, max=1573, avg=127.74, stdev=145.23 00:12:28.935 clat (msec): min=46, max=389, avg=92.96, stdev=47.00 00:12:28.935 lat (msec): min=46, max=389, avg=93.09, stdev=47.00 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 52], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 62], 00:12:28.935 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 77], 60.00th=[ 84], 00:12:28.935 | 70.00th=[ 96], 80.00th=[ 113], 90.00th=[ 148], 95.00th=[ 178], 00:12:28.935 | 99.00th=[ 296], 99.50th=[ 334], 99.90th=[ 388], 99.95th=[ 388], 00:12:28.935 | 99.99th=[ 388] 00:12:28.935 bw ( KiB/s): min= 1792, max=15647, per=1.09%, avg=9094.90, stdev=4446.48, samples=20 00:12:28.935 iops : min= 14, max= 122, avg=70.95, stdev=34.70, samples=20 00:12:28.935 lat (msec) : 10=6.48%, 20=24.52%, 50=15.98%, 100=38.14%, 250=13.70% 00:12:28.935 lat (msec) : 500=1.18% 00:12:28.935 cpu : usr=0.61%, sys=0.29%, ctx=2085, majf=0, minf=3 00:12:28.935 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.935 issued rwts: total=640,718,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.935 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.935 job65: (groupid=0, jobs=1): err= 0: pid=71022: Tue Jul 23 02:07:37 2024 00:12:28.935 read: IOPS=73, BW=9393KiB/s (9619kB/s)(80.0MiB/8721msec) 00:12:28.935 slat (usec): min=7, max=1021, avg=62.74, stdev=108.03 00:12:28.935 clat (msec): min=7, max=107, avg=20.79, stdev=13.48 00:12:28.935 lat (msec): min=7, max=107, avg=20.85, stdev=13.48 00:12:28.935 clat percentiles (msec): 00:12:28.935 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 13], 00:12:28.935 | 30.00th=[ 
15], 40.00th=[ 17], 50.00th=[ 19], 60.00th=[ 21], 00:12:28.935 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 29], 95.00th=[ 40], 00:12:28.935 | 99.00th=[ 96], 99.50th=[ 100], 99.90th=[ 108], 99.95th=[ 108], 00:12:28.935 | 99.99th=[ 108] 00:12:28.935 write: IOPS=94, BW=11.8MiB/s (12.4MB/s)(98.8MiB/8375msec); 0 zone resets 00:12:28.935 slat (usec): min=38, max=2252, avg=124.20, stdev=169.54 00:12:28.936 clat (msec): min=4, max=315, avg=84.22, stdev=40.54 00:12:28.936 lat (msec): min=4, max=315, avg=84.34, stdev=40.53 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 12], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 64], 00:12:28.936 | 30.00th=[ 67], 40.00th=[ 70], 50.00th=[ 74], 60.00th=[ 80], 00:12:28.936 | 70.00th=[ 86], 80.00th=[ 94], 90.00th=[ 116], 95.00th=[ 146], 00:12:28.936 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 317], 99.95th=[ 317], 00:12:28.936 | 99.99th=[ 317] 00:12:28.936 bw ( KiB/s): min= 509, max=18688, per=1.26%, avg=10534.58, stdev=5012.38, samples=19 00:12:28.936 iops : min= 3, max= 146, avg=82.21, stdev=39.23, samples=19 00:12:28.936 lat (msec) : 10=2.24%, 20=24.69%, 50=17.48%, 100=46.78%, 250=7.69% 00:12:28.936 lat (msec) : 500=1.12% 00:12:28.936 cpu : usr=0.57%, sys=0.32%, ctx=2314, majf=0, minf=1 00:12:28.936 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 issued rwts: total=640,790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.936 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.936 job66: (groupid=0, jobs=1): err= 0: pid=71023: Tue Jul 23 02:07:37 2024 00:12:28.936 read: IOPS=75, BW=9602KiB/s (9832kB/s)(80.0MiB/8532msec) 00:12:28.936 slat (usec): min=6, max=4075, avg=84.06, stdev=226.28 00:12:28.936 clat (usec): min=5483, max=96052, avg=20442.90, stdev=11739.75 00:12:28.936 lat (usec): min=5512, max=96066, 
avg=20526.96, stdev=11744.37 00:12:28.936 clat percentiles (usec): 00:12:28.936 | 1.00th=[ 8029], 5.00th=[ 9372], 10.00th=[10290], 20.00th=[12518], 00:12:28.936 | 30.00th=[13566], 40.00th=[15270], 50.00th=[17433], 60.00th=[20317], 00:12:28.936 | 70.00th=[22414], 80.00th=[25297], 90.00th=[33424], 95.00th=[40633], 00:12:28.936 | 99.00th=[87557], 99.50th=[87557], 99.90th=[95945], 99.95th=[95945], 00:12:28.936 | 99.99th=[95945] 00:12:28.936 write: IOPS=92, BW=11.6MiB/s (12.2MB/s)(97.6MiB/8402msec); 0 zone resets 00:12:28.936 slat (usec): min=38, max=2983, avg=140.70, stdev=214.40 00:12:28.936 clat (msec): min=25, max=298, avg=85.27, stdev=36.28 00:12:28.936 lat (msec): min=25, max=299, avg=85.41, stdev=36.27 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 32], 5.00th=[ 59], 10.00th=[ 60], 20.00th=[ 64], 00:12:28.936 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 81], 00:12:28.936 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 121], 95.00th=[ 153], 00:12:28.936 | 99.00th=[ 262], 99.50th=[ 279], 99.90th=[ 300], 99.95th=[ 300], 00:12:28.936 | 99.99th=[ 300] 00:12:28.936 bw ( KiB/s): min= 256, max=16384, per=1.18%, avg=9895.60, stdev=4839.17, samples=20 00:12:28.936 iops : min= 2, max= 128, avg=77.10, stdev=37.84, samples=20 00:12:28.936 lat (msec) : 10=3.94%, 20=22.38%, 50=18.58%, 100=45.67%, 250=8.80% 00:12:28.936 lat (msec) : 500=0.63% 00:12:28.936 cpu : usr=0.61%, sys=0.27%, ctx=2334, majf=0, minf=7 00:12:28.936 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 issued rwts: total=640,781,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.936 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.936 job67: (groupid=0, jobs=1): err= 0: pid=71024: Tue Jul 23 02:07:37 2024 00:12:28.936 read: IOPS=80, BW=10.0MiB/s (10.5MB/s)(80.0MiB/7984msec) 
00:12:28.936 slat (usec): min=6, max=1424, avg=56.77, stdev=109.00 00:12:28.936 clat (usec): min=4619, max=96749, avg=15770.79, stdev=13880.24 00:12:28.936 lat (usec): min=4858, max=96763, avg=15827.56, stdev=13874.60 00:12:28.936 clat percentiles (usec): 00:12:28.936 | 1.00th=[ 5080], 5.00th=[ 5800], 10.00th=[ 6849], 20.00th=[ 8586], 00:12:28.936 | 30.00th=[ 9765], 40.00th=[10945], 50.00th=[12125], 60.00th=[12911], 00:12:28.936 | 70.00th=[14484], 80.00th=[17171], 90.00th=[27395], 95.00th=[42730], 00:12:28.936 | 99.00th=[84411], 99.50th=[88605], 99.90th=[96994], 99.95th=[96994], 00:12:28.936 | 99.99th=[96994] 00:12:28.936 write: IOPS=77, BW=9978KiB/s (10.2MB/s)(85.6MiB/8787msec); 0 zone resets 00:12:28.936 slat (usec): min=42, max=7766, avg=172.04, stdev=435.30 00:12:28.936 clat (msec): min=16, max=390, avg=101.73, stdev=46.73 00:12:28.936 lat (msec): min=17, max=390, avg=101.90, stdev=46.72 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 23], 5.00th=[ 58], 10.00th=[ 60], 20.00th=[ 65], 00:12:28.936 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 101], 00:12:28.936 | 70.00th=[ 112], 80.00th=[ 133], 90.00th=[ 165], 95.00th=[ 188], 00:12:28.936 | 99.00th=[ 275], 99.50th=[ 338], 99.90th=[ 393], 99.95th=[ 393], 00:12:28.936 | 99.99th=[ 393] 00:12:28.936 bw ( KiB/s): min= 1792, max=16128, per=1.04%, avg=8676.00, stdev=3924.63, samples=20 00:12:28.936 iops : min= 14, max= 126, avg=67.70, stdev=30.63, samples=20 00:12:28.936 lat (msec) : 10=15.55%, 20=27.02%, 50=4.38%, 100=31.77%, 250=20.45% 00:12:28.936 lat (msec) : 500=0.83% 00:12:28.936 cpu : usr=0.58%, sys=0.28%, ctx=2149, majf=0, minf=1 00:12:28.936 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 issued rwts: total=640,685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.936 latency : target=0, 
window=0, percentile=100.00%, depth=8 00:12:28.936 job68: (groupid=0, jobs=1): err= 0: pid=71025: Tue Jul 23 02:07:37 2024 00:12:28.936 read: IOPS=69, BW=8863KiB/s (9075kB/s)(68.4MiB/7900msec) 00:12:28.936 slat (usec): min=6, max=814, avg=50.86, stdev=86.92 00:12:28.936 clat (msec): min=4, max=272, avg=19.30, stdev=29.81 00:12:28.936 lat (msec): min=4, max=272, avg=19.35, stdev=29.80 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:12:28.936 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 16], 00:12:28.936 | 70.00th=[ 19], 80.00th=[ 22], 90.00th=[ 29], 95.00th=[ 40], 00:12:28.936 | 99.00th=[ 247], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:12:28.936 | 99.99th=[ 271] 00:12:28.936 write: IOPS=73, BW=9444KiB/s (9671kB/s)(80.0MiB/8674msec); 0 zone resets 00:12:28.936 slat (usec): min=42, max=4624, avg=154.87, stdev=294.25 00:12:28.936 clat (msec): min=52, max=352, avg=107.75, stdev=49.34 00:12:28.936 lat (msec): min=53, max=352, avg=107.90, stdev=49.33 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 58], 5.00th=[ 59], 10.00th=[ 63], 20.00th=[ 69], 00:12:28.936 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 91], 60.00th=[ 104], 00:12:28.936 | 70.00th=[ 121], 80.00th=[ 146], 90.00th=[ 171], 95.00th=[ 194], 00:12:28.936 | 99.00th=[ 292], 99.50th=[ 334], 99.90th=[ 355], 99.95th=[ 355], 00:12:28.936 | 99.99th=[ 355] 00:12:28.936 bw ( KiB/s): min= 2048, max=16384, per=1.01%, avg=8484.89, stdev=3715.90, samples=19 00:12:28.936 iops : min= 16, max= 128, avg=66.05, stdev=29.17, samples=19 00:12:28.936 lat (msec) : 10=15.16%, 20=20.47%, 50=8.85%, 100=31.84%, 250=22.49% 00:12:28.936 lat (msec) : 500=1.18% 00:12:28.936 cpu : usr=0.48%, sys=0.27%, ctx=1953, majf=0, minf=5 00:12:28.936 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 issued rwts: total=547,640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.936 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.936 job69: (groupid=0, jobs=1): err= 0: pid=71026: Tue Jul 23 02:07:37 2024 00:12:28.936 read: IOPS=74, BW=9506KiB/s (9734kB/s)(81.1MiB/8739msec) 00:12:28.936 slat (usec): min=7, max=1125, avg=51.64, stdev=86.14 00:12:28.936 clat (msec): min=3, max=125, avg=16.86, stdev=15.74 00:12:28.936 lat (msec): min=3, max=125, avg=16.91, stdev=15.74 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 8], 00:12:28.936 | 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 16], 00:12:28.936 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 29], 95.00th=[ 41], 00:12:28.936 | 99.00th=[ 90], 99.50th=[ 115], 99.90th=[ 126], 99.95th=[ 126], 00:12:28.936 | 99.99th=[ 126] 00:12:28.936 write: IOPS=92, BW=11.6MiB/s (12.1MB/s)(100MiB/8652msec); 0 zone resets 00:12:28.936 slat (usec): min=42, max=19809, avg=167.72, stdev=756.76 00:12:28.936 clat (msec): min=39, max=340, avg=85.78, stdev=37.36 00:12:28.936 lat (msec): min=40, max=341, avg=85.95, stdev=37.34 00:12:28.936 clat percentiles (msec): 00:12:28.936 | 1.00th=[ 46], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 62], 00:12:28.936 | 30.00th=[ 65], 40.00th=[ 68], 50.00th=[ 73], 60.00th=[ 79], 00:12:28.936 | 70.00th=[ 87], 80.00th=[ 103], 90.00th=[ 138], 95.00th=[ 161], 00:12:28.936 | 99.00th=[ 234], 99.50th=[ 249], 99.90th=[ 342], 99.95th=[ 342], 00:12:28.936 | 99.99th=[ 342] 00:12:28.936 bw ( KiB/s): min= 1792, max=16384, per=1.22%, avg=10237.95, stdev=4769.18, samples=20 00:12:28.936 iops : min= 14, max= 128, avg=79.85, stdev=37.23, samples=20 00:12:28.936 lat (msec) : 4=0.14%, 10=16.08%, 20=18.56%, 50=8.83%, 100=44.51% 00:12:28.936 lat (msec) : 250=11.59%, 500=0.28% 00:12:28.936 cpu : usr=0.57%, sys=0.29%, ctx=2366, majf=0, minf=3 00:12:28.936 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:12:28.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.936 issued rwts: total=649,800,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.936 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.936 job70: (groupid=0, jobs=1): err= 0: pid=71027: Tue Jul 23 02:07:37 2024 00:12:28.936 read: IOPS=63, BW=8086KiB/s (8280kB/s)(60.0MiB/7598msec) 00:12:28.936 slat (usec): min=6, max=824, avg=53.36, stdev=86.18 00:12:28.937 clat (msec): min=7, max=108, avg=26.39, stdev=16.25 00:12:28.937 lat (msec): min=7, max=108, avg=26.45, stdev=16.25 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 18], 00:12:28.937 | 30.00th=[ 20], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 24], 00:12:28.937 | 70.00th=[ 26], 80.00th=[ 31], 90.00th=[ 42], 95.00th=[ 60], 00:12:28.937 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 109], 99.95th=[ 109], 00:12:28.937 | 99.99th=[ 109] 00:12:28.937 write: IOPS=61, BW=7857KiB/s (8046kB/s)(65.0MiB/8471msec); 0 zone resets 00:12:28.937 slat (usec): min=46, max=2690, avg=171.17, stdev=254.88 00:12:28.937 clat (msec): min=70, max=525, avg=128.71, stdev=60.05 00:12:28.937 lat (msec): min=70, max=525, avg=128.88, stdev=60.05 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 85], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 91], 00:12:28.937 | 30.00th=[ 96], 40.00th=[ 102], 50.00th=[ 108], 60.00th=[ 114], 00:12:28.937 | 70.00th=[ 127], 80.00th=[ 155], 90.00th=[ 199], 95.00th=[ 251], 00:12:28.937 | 99.00th=[ 363], 99.50th=[ 464], 99.90th=[ 527], 99.95th=[ 527], 00:12:28.937 | 99.99th=[ 527] 00:12:28.937 bw ( KiB/s): min= 254, max=10920, per=0.81%, avg=6778.06, stdev=3312.94, samples=18 00:12:28.937 iops : min= 1, max= 85, avg=52.44, stdev=26.12, samples=18 00:12:28.937 lat (msec) : 10=0.30%, 20=17.70%, 50=25.90%, 100=22.30%, 250=31.20% 00:12:28.937 lat (msec) : 500=2.50%, 750=0.10% 
00:12:28.937 cpu : usr=0.45%, sys=0.22%, ctx=1711, majf=0, minf=5 00:12:28.937 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 issued rwts: total=480,520,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.937 job71: (groupid=0, jobs=1): err= 0: pid=71028: Tue Jul 23 02:07:37 2024 00:12:28.937 read: IOPS=45, BW=5782KiB/s (5921kB/s)(40.0MiB/7084msec) 00:12:28.937 slat (usec): min=7, max=1140, avg=49.69, stdev=88.07 00:12:28.937 clat (msec): min=7, max=175, avg=22.20, stdev=23.61 00:12:28.937 lat (msec): min=7, max=175, avg=22.25, stdev=23.61 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:12:28.937 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 16], 60.00th=[ 18], 00:12:28.937 | 70.00th=[ 22], 80.00th=[ 24], 90.00th=[ 33], 95.00th=[ 42], 00:12:28.937 | 99.00th=[ 167], 99.50th=[ 171], 99.90th=[ 176], 99.95th=[ 176], 00:12:28.937 | 99.99th=[ 176] 00:12:28.937 write: IOPS=52, BW=6708KiB/s (6869kB/s)(60.0MiB/9159msec); 0 zone resets 00:12:28.937 slat (usec): min=38, max=6144, avg=176.99, stdev=366.78 00:12:28.937 clat (msec): min=47, max=576, avg=151.41, stdev=69.29 00:12:28.937 lat (msec): min=48, max=577, avg=151.59, stdev=69.31 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 54], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 95], 00:12:28.937 | 30.00th=[ 106], 40.00th=[ 117], 50.00th=[ 131], 60.00th=[ 155], 00:12:28.937 | 70.00th=[ 174], 80.00th=[ 201], 90.00th=[ 234], 95.00th=[ 279], 00:12:28.937 | 99.00th=[ 405], 99.50th=[ 468], 99.90th=[ 575], 99.95th=[ 575], 00:12:28.937 | 99.99th=[ 575] 00:12:28.937 bw ( KiB/s): min= 256, max=11030, per=0.72%, avg=6047.60, stdev=2945.76, samples=20 00:12:28.937 iops : min= 2, max= 86, avg=47.05, 
stdev=23.02, samples=20 00:12:28.937 lat (msec) : 10=1.38%, 20=25.38%, 50=11.88%, 100=15.75%, 250=40.50% 00:12:28.937 lat (msec) : 500=4.88%, 750=0.25% 00:12:28.937 cpu : usr=0.34%, sys=0.20%, ctx=1325, majf=0, minf=5 00:12:28.937 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 issued rwts: total=320,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.937 job72: (groupid=0, jobs=1): err= 0: pid=71029: Tue Jul 23 02:07:37 2024 00:12:28.937 read: IOPS=58, BW=7478KiB/s (7658kB/s)(60.0MiB/8216msec) 00:12:28.937 slat (usec): min=5, max=1184, avg=56.12, stdev=117.00 00:12:28.937 clat (msec): min=7, max=112, avg=22.56, stdev=13.95 00:12:28.937 lat (msec): min=7, max=112, avg=22.62, stdev=13.95 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 16], 00:12:28.937 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 21], 00:12:28.937 | 70.00th=[ 23], 80.00th=[ 25], 90.00th=[ 29], 95.00th=[ 45], 00:12:28.937 | 99.00th=[ 99], 99.50th=[ 101], 99.90th=[ 113], 99.95th=[ 113], 00:12:28.937 | 99.99th=[ 113] 00:12:28.937 write: IOPS=62, BW=8003KiB/s (8195kB/s)(68.1MiB/8717msec); 0 zone resets 00:12:28.937 slat (usec): min=33, max=8286, avg=140.46, stdev=381.02 00:12:28.937 clat (msec): min=23, max=574, avg=126.82, stdev=66.66 00:12:28.937 lat (msec): min=25, max=574, avg=126.96, stdev=66.66 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 31], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 89], 00:12:28.937 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 113], 00:12:28.937 | 70.00th=[ 128], 80.00th=[ 148], 90.00th=[ 186], 95.00th=[ 257], 00:12:28.937 | 99.00th=[ 439], 99.50th=[ 498], 99.90th=[ 575], 99.95th=[ 575], 00:12:28.937 | 99.99th=[ 
575] 00:12:28.937 bw ( KiB/s): min= 510, max=11776, per=0.91%, avg=7651.67, stdev=3042.17, samples=18 00:12:28.937 iops : min= 3, max= 92, avg=59.61, stdev=24.01, samples=18 00:12:28.937 lat (msec) : 10=0.10%, 20=27.32%, 50=18.34%, 100=22.93%, 250=28.29% 00:12:28.937 lat (msec) : 500=2.83%, 750=0.20% 00:12:28.937 cpu : usr=0.39%, sys=0.25%, ctx=1550, majf=0, minf=3 00:12:28.937 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 issued rwts: total=480,545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.937 job73: (groupid=0, jobs=1): err= 0: pid=71030: Tue Jul 23 02:07:37 2024 00:12:28.937 read: IOPS=59, BW=7571KiB/s (7753kB/s)(60.0MiB/8115msec) 00:12:28.937 slat (usec): min=7, max=1060, avg=72.02, stdev=141.22 00:12:28.937 clat (msec): min=13, max=103, avg=30.66, stdev=13.16 00:12:28.937 lat (msec): min=14, max=103, avg=30.74, stdev=13.17 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 16], 5.00th=[ 18], 10.00th=[ 19], 20.00th=[ 20], 00:12:28.937 | 30.00th=[ 22], 40.00th=[ 25], 50.00th=[ 29], 60.00th=[ 32], 00:12:28.937 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 51], 00:12:28.937 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 104], 99.95th=[ 104], 00:12:28.937 | 99.99th=[ 104] 00:12:28.937 write: IOPS=65, BW=8434KiB/s (8636kB/s)(67.8MiB/8226msec); 0 zone resets 00:12:28.937 slat (usec): min=38, max=1837, avg=149.46, stdev=222.37 00:12:28.937 clat (msec): min=11, max=365, avg=120.02, stdev=52.65 00:12:28.937 lat (msec): min=11, max=365, avg=120.17, stdev=52.67 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 21], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.937 | 30.00th=[ 93], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 112], 00:12:28.937 | 70.00th=[ 121], 
80.00th=[ 136], 90.00th=[ 178], 95.00th=[ 249], 00:12:28.937 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 368], 99.95th=[ 368], 00:12:28.937 | 99.99th=[ 368] 00:12:28.937 bw ( KiB/s): min= 512, max=12032, per=0.91%, avg=7592.61, stdev=3286.62, samples=18 00:12:28.937 iops : min= 4, max= 94, avg=59.17, stdev=25.70, samples=18 00:12:28.937 lat (msec) : 20=10.86%, 50=34.74%, 100=24.17%, 250=27.59%, 500=2.64% 00:12:28.937 cpu : usr=0.43%, sys=0.25%, ctx=1611, majf=0, minf=1 00:12:28.937 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.937 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.937 issued rwts: total=480,542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.937 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.937 job74: (groupid=0, jobs=1): err= 0: pid=71031: Tue Jul 23 02:07:37 2024 00:12:28.937 read: IOPS=67, BW=8635KiB/s (8843kB/s)(60.0MiB/7115msec) 00:12:28.937 slat (usec): min=7, max=1966, avg=67.27, stdev=172.30 00:12:28.937 clat (msec): min=6, max=142, avg=18.91, stdev=17.52 00:12:28.937 lat (msec): min=6, max=142, avg=18.98, stdev=17.52 00:12:28.937 clat percentiles (msec): 00:12:28.937 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 11], 00:12:28.937 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 17], 00:12:28.937 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 31], 95.00th=[ 48], 00:12:28.937 | 99.00th=[ 125], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 142], 00:12:28.937 | 99.99th=[ 142] 00:12:28.937 write: IOPS=55, BW=7110KiB/s (7281kB/s)(62.1MiB/8947msec); 0 zone resets 00:12:28.937 slat (usec): min=42, max=13184, avg=198.57, stdev=629.01 00:12:28.937 clat (msec): min=3, max=531, avg=142.34, stdev=76.42 00:12:28.937 lat (msec): min=3, max=532, avg=142.53, stdev=76.40 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 16], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 
00:12:28.938 | 30.00th=[ 94], 40.00th=[ 101], 50.00th=[ 114], 60.00th=[ 128], 00:12:28.938 | 70.00th=[ 163], 80.00th=[ 194], 90.00th=[ 251], 95.00th=[ 309], 00:12:28.938 | 99.00th=[ 376], 99.50th=[ 518], 99.90th=[ 531], 99.95th=[ 531], 00:12:28.938 | 99.99th=[ 531] 00:12:28.938 bw ( KiB/s): min= 2048, max=12056, per=0.79%, avg=6588.79, stdev=3176.71, samples=19 00:12:28.938 iops : min= 16, max= 94, avg=51.32, stdev=24.97, samples=19 00:12:28.938 lat (msec) : 4=0.10%, 10=9.11%, 20=30.40%, 50=8.90%, 100=19.96% 00:12:28.938 lat (msec) : 250=26.41%, 500=4.81%, 750=0.31% 00:12:28.938 cpu : usr=0.49%, sys=0.19%, ctx=1575, majf=0, minf=3 00:12:28.938 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 issued rwts: total=480,497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.938 job75: (groupid=0, jobs=1): err= 0: pid=71032: Tue Jul 23 02:07:37 2024 00:12:28.938 read: IOPS=56, BW=7291KiB/s (7466kB/s)(49.1MiB/6899msec) 00:12:28.938 slat (usec): min=6, max=812, avg=42.42, stdev=71.78 00:12:28.938 clat (msec): min=7, max=319, avg=24.26, stdev=41.20 00:12:28.938 lat (msec): min=7, max=319, avg=24.30, stdev=41.20 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:12:28.938 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 17], 00:12:28.938 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 32], 95.00th=[ 63], 00:12:28.938 | 99.00th=[ 317], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:12:28.938 | 99.99th=[ 321] 00:12:28.938 write: IOPS=54, BW=6972KiB/s (7140kB/s)(60.0MiB/8812msec); 0 zone resets 00:12:28.938 slat (usec): min=31, max=2303, avg=173.00, stdev=245.99 00:12:28.938 clat (msec): min=82, max=406, avg=145.86, stdev=62.95 00:12:28.938 lat 
(msec): min=82, max=406, avg=146.03, stdev=62.96 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 85], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 95], 00:12:28.938 | 30.00th=[ 101], 40.00th=[ 107], 50.00th=[ 115], 60.00th=[ 144], 00:12:28.938 | 70.00th=[ 167], 80.00th=[ 199], 90.00th=[ 243], 95.00th=[ 284], 00:12:28.938 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 405], 99.95th=[ 405], 00:12:28.938 | 99.99th=[ 405] 00:12:28.938 bw ( KiB/s): min= 2544, max=10666, per=0.77%, avg=6407.05, stdev=2800.79, samples=19 00:12:28.938 iops : min= 19, max= 83, avg=49.47, stdev=22.03, samples=19 00:12:28.938 lat (msec) : 10=8.25%, 20=24.40%, 50=9.16%, 100=17.75%, 250=34.94% 00:12:28.938 lat (msec) : 500=5.50% 00:12:28.938 cpu : usr=0.47%, sys=0.14%, ctx=1419, majf=0, minf=7 00:12:28.938 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 issued rwts: total=393,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.938 job76: (groupid=0, jobs=1): err= 0: pid=71033: Tue Jul 23 02:07:37 2024 00:12:28.938 read: IOPS=47, BW=6102KiB/s (6248kB/s)(40.0MiB/6713msec) 00:12:28.938 slat (usec): min=6, max=1323, avg=58.74, stdev=125.20 00:12:28.938 clat (msec): min=7, max=336, avg=44.13, stdev=64.23 00:12:28.938 lat (msec): min=7, max=336, avg=44.19, stdev=64.23 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 16], 00:12:28.938 | 30.00th=[ 17], 40.00th=[ 19], 50.00th=[ 22], 60.00th=[ 24], 00:12:28.938 | 70.00th=[ 31], 80.00th=[ 38], 90.00th=[ 115], 95.00th=[ 220], 00:12:28.938 | 99.00th=[ 296], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 338], 00:12:28.938 | 99.99th=[ 338] 00:12:28.938 write: IOPS=48, BW=6239KiB/s (6389kB/s)(50.5MiB/8288msec); 0 zone resets 
00:12:28.938 slat (usec): min=44, max=1810, avg=148.86, stdev=171.57 00:12:28.938 clat (msec): min=77, max=404, avg=163.28, stdev=69.47 00:12:28.938 lat (msec): min=77, max=404, avg=163.43, stdev=69.47 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 86], 5.00th=[ 92], 10.00th=[ 94], 20.00th=[ 103], 00:12:28.938 | 30.00th=[ 111], 40.00th=[ 123], 50.00th=[ 142], 60.00th=[ 167], 00:12:28.938 | 70.00th=[ 192], 80.00th=[ 226], 90.00th=[ 268], 95.00th=[ 305], 00:12:28.938 | 99.00th=[ 347], 99.50th=[ 380], 99.90th=[ 405], 99.95th=[ 405], 00:12:28.938 | 99.99th=[ 405] 00:12:28.938 bw ( KiB/s): min= 1015, max= 9944, per=0.65%, avg=5449.24, stdev=2693.84, samples=17 00:12:28.938 iops : min= 7, max= 77, avg=41.94, stdev=21.20, samples=17 00:12:28.938 lat (msec) : 10=3.31%, 20=16.02%, 50=18.37%, 100=10.77%, 250=42.96% 00:12:28.938 lat (msec) : 500=8.56% 00:12:28.938 cpu : usr=0.36%, sys=0.15%, ctx=1214, majf=0, minf=5 00:12:28.938 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 issued rwts: total=320,404,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.938 job77: (groupid=0, jobs=1): err= 0: pid=71034: Tue Jul 23 02:07:37 2024 00:12:28.938 read: IOPS=62, BW=7966KiB/s (8157kB/s)(60.0MiB/7713msec) 00:12:28.938 slat (usec): min=7, max=1980, avg=78.58, stdev=172.03 00:12:28.938 clat (usec): min=15135, max=63797, avg=27002.11, stdev=9721.50 00:12:28.938 lat (usec): min=15152, max=63808, avg=27080.69, stdev=9710.15 00:12:28.938 clat percentiles (usec): 00:12:28.938 | 1.00th=[16319], 5.00th=[17695], 10.00th=[18482], 20.00th=[19530], 00:12:28.938 | 30.00th=[20841], 40.00th=[22414], 50.00th=[23725], 60.00th=[25035], 00:12:28.938 | 70.00th=[28443], 80.00th=[32900], 90.00th=[41157], 95.00th=[49546], 
00:12:28.938 | 99.00th=[58983], 99.50th=[62129], 99.90th=[63701], 99.95th=[63701], 00:12:28.938 | 99.99th=[63701] 00:12:28.938 write: IOPS=62, BW=8042KiB/s (8235kB/s)(66.4MiB/8452msec); 0 zone resets 00:12:28.938 slat (usec): min=37, max=7307, avg=157.76, stdev=353.29 00:12:28.938 clat (msec): min=40, max=514, avg=125.60, stdev=61.46 00:12:28.938 lat (msec): min=41, max=514, avg=125.75, stdev=61.43 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 47], 5.00th=[ 86], 10.00th=[ 87], 20.00th=[ 90], 00:12:28.938 | 30.00th=[ 95], 40.00th=[ 101], 50.00th=[ 107], 60.00th=[ 113], 00:12:28.938 | 70.00th=[ 124], 80.00th=[ 144], 90.00th=[ 184], 95.00th=[ 241], 00:12:28.938 | 99.00th=[ 388], 99.50th=[ 477], 99.90th=[ 514], 99.95th=[ 514], 00:12:28.938 | 99.99th=[ 514] 00:12:28.938 bw ( KiB/s): min= 1792, max=12032, per=0.84%, avg=7045.21, stdev=3355.56, samples=19 00:12:28.938 iops : min= 14, max= 94, avg=54.95, stdev=26.23, samples=19 00:12:28.938 lat (msec) : 20=10.48%, 50=35.41%, 100=22.95%, 250=28.78%, 500=2.18% 00:12:28.938 lat (msec) : 750=0.20% 00:12:28.938 cpu : usr=0.45%, sys=0.25%, ctx=1620, majf=0, minf=3 00:12:28.938 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 issued rwts: total=480,531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.938 job78: (groupid=0, jobs=1): err= 0: pid=71035: Tue Jul 23 02:07:37 2024 00:12:28.938 read: IOPS=59, BW=7563KiB/s (7744kB/s)(60.0MiB/8124msec) 00:12:28.938 slat (usec): min=6, max=782, avg=68.53, stdev=122.50 00:12:28.938 clat (msec): min=8, max=102, avg=28.78, stdev=16.27 00:12:28.938 lat (msec): min=8, max=102, avg=28.85, stdev=16.27 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 17], 
00:12:28.938 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 28], 00:12:28.938 | 70.00th=[ 31], 80.00th=[ 41], 90.00th=[ 53], 95.00th=[ 65], 00:12:28.938 | 99.00th=[ 87], 99.50th=[ 88], 99.90th=[ 103], 99.95th=[ 103], 00:12:28.938 | 99.99th=[ 103] 00:12:28.938 write: IOPS=65, BW=8425KiB/s (8627kB/s)(68.6MiB/8341msec); 0 zone resets 00:12:28.938 slat (usec): min=41, max=1622, avg=140.28, stdev=160.14 00:12:28.938 clat (msec): min=31, max=415, avg=120.62, stdev=50.20 00:12:28.938 lat (msec): min=31, max=415, avg=120.76, stdev=50.21 00:12:28.938 clat percentiles (msec): 00:12:28.938 | 1.00th=[ 39], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:12:28.938 | 30.00th=[ 93], 40.00th=[ 99], 50.00th=[ 106], 60.00th=[ 112], 00:12:28.938 | 70.00th=[ 124], 80.00th=[ 144], 90.00th=[ 180], 95.00th=[ 222], 00:12:28.938 | 99.00th=[ 347], 99.50th=[ 405], 99.90th=[ 418], 99.95th=[ 418], 00:12:28.938 | 99.99th=[ 418] 00:12:28.938 bw ( KiB/s): min= 512, max=11520, per=0.87%, avg=7271.58, stdev=3545.45, samples=19 00:12:28.938 iops : min= 4, max= 90, avg=56.68, stdev=27.67, samples=19 00:12:28.938 lat (msec) : 10=0.29%, 20=17.78%, 50=24.10%, 100=27.60%, 250=29.15% 00:12:28.938 lat (msec) : 500=1.07% 00:12:28.938 cpu : usr=0.52%, sys=0.17%, ctx=1732, majf=0, minf=3 00:12:28.938 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.938 issued rwts: total=480,549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.938 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.938 job79: (groupid=0, jobs=1): err= 0: pid=71036: Tue Jul 23 02:07:37 2024 00:12:28.938 read: IOPS=60, BW=7777KiB/s (7964kB/s)(60.0MiB/7900msec) 00:12:28.938 slat (usec): min=7, max=1736, avg=57.79, stdev=118.79 00:12:28.938 clat (usec): min=15727, max=97232, avg=28680.35, stdev=12457.03 00:12:28.938 lat (usec): 
min=15746, max=97260, avg=28738.14, stdev=12463.44 00:12:28.938 clat percentiles (usec): 00:12:28.938 | 1.00th=[16057], 5.00th=[16581], 10.00th=[17433], 20.00th=[18744], 00:12:28.938 | 30.00th=[20055], 40.00th=[22676], 50.00th=[25822], 60.00th=[28443], 00:12:28.938 | 70.00th=[31327], 80.00th=[34866], 90.00th=[45351], 95.00th=[56361], 00:12:28.938 | 99.00th=[74974], 99.50th=[77071], 99.90th=[96994], 99.95th=[96994], 00:12:28.939 | 99.99th=[96994] 00:12:28.939 write: IOPS=64, BW=8275KiB/s (8474kB/s)(67.4MiB/8337msec); 0 zone resets 00:12:28.939 slat (usec): min=38, max=3133, avg=146.17, stdev=222.55 00:12:28.939 clat (msec): min=69, max=406, avg=122.23, stdev=51.66 00:12:28.939 lat (msec): min=69, max=407, avg=122.37, stdev=51.68 00:12:28.939 clat percentiles (msec): 00:12:28.939 | 1.00th=[ 77], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.939 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 104], 60.00th=[ 111], 00:12:28.939 | 70.00th=[ 126], 80.00th=[ 148], 90.00th=[ 184], 95.00th=[ 222], 00:12:28.939 | 99.00th=[ 330], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:12:28.939 | 99.99th=[ 409] 00:12:28.939 bw ( KiB/s): min= 1788, max=11520, per=0.86%, avg=7163.37, stdev=3295.54, samples=19 00:12:28.939 iops : min= 13, max= 90, avg=55.84, stdev=25.86, samples=19 00:12:28.939 lat (msec) : 20=14.03%, 50=29.34%, 100=26.89%, 250=27.97%, 500=1.77% 00:12:28.939 cpu : usr=0.48%, sys=0.23%, ctx=1667, majf=0, minf=5 00:12:28.939 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 issued rwts: total=480,539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.939 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.939 job80: (groupid=0, jobs=1): err= 0: pid=71037: Tue Jul 23 02:07:37 2024 00:12:28.939 read: IOPS=62, BW=7967KiB/s (8158kB/s)(60.0MiB/7712msec) 
00:12:28.939 slat (usec): min=6, max=1180, avg=68.26, stdev=135.42 00:12:28.939 clat (usec): min=8157, max=68229, avg=23655.19, stdev=10615.95 00:12:28.939 lat (usec): min=8357, max=68248, avg=23723.45, stdev=10620.21 00:12:28.939 clat percentiles (usec): 00:12:28.939 | 1.00th=[10290], 5.00th=[11994], 10.00th=[13304], 20.00th=[15270], 00:12:28.939 | 30.00th=[17433], 40.00th=[19792], 50.00th=[21627], 60.00th=[23462], 00:12:28.939 | 70.00th=[26084], 80.00th=[29230], 90.00th=[34341], 95.00th=[42730], 00:12:28.939 | 99.00th=[64226], 99.50th=[64750], 99.90th=[68682], 99.95th=[68682], 00:12:28.939 | 99.99th=[68682] 00:12:28.939 write: IOPS=57, BW=7409KiB/s (7587kB/s)(62.5MiB/8638msec); 0 zone resets 00:12:28.939 slat (usec): min=42, max=1605, avg=145.29, stdev=198.29 00:12:28.939 clat (msec): min=71, max=497, avg=136.71, stdev=76.57 00:12:28.939 lat (msec): min=71, max=497, avg=136.85, stdev=76.58 00:12:28.939 clat percentiles (msec): 00:12:28.939 | 1.00th=[ 79], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 87], 00:12:28.939 | 30.00th=[ 91], 40.00th=[ 97], 50.00th=[ 106], 60.00th=[ 114], 00:12:28.939 | 70.00th=[ 132], 80.00th=[ 178], 90.00th=[ 245], 95.00th=[ 296], 00:12:28.939 | 99.00th=[ 439], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 498], 00:12:28.939 | 99.99th=[ 498] 00:12:28.939 bw ( KiB/s): min= 768, max=11008, per=0.79%, avg=6626.58, stdev=3539.36, samples=19 00:12:28.939 iops : min= 6, max= 86, avg=51.63, stdev=27.79, samples=19 00:12:28.939 lat (msec) : 10=0.20%, 20=20.00%, 50=26.94%, 100=24.18%, 250=23.98% 00:12:28.939 lat (msec) : 500=4.69% 00:12:28.939 cpu : usr=0.47%, sys=0.14%, ctx=1599, majf=0, minf=9 00:12:28.939 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=95.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 issued rwts: total=480,500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.939 latency : target=0, 
window=0, percentile=100.00%, depth=8 00:12:28.939 job81: (groupid=0, jobs=1): err= 0: pid=71038: Tue Jul 23 02:07:37 2024 00:12:28.939 read: IOPS=58, BW=7485KiB/s (7665kB/s)(60.0MiB/8208msec) 00:12:28.939 slat (usec): min=6, max=1489, avg=58.83, stdev=135.38 00:12:28.939 clat (msec): min=8, max=130, avg=25.58, stdev=15.20 00:12:28.939 lat (msec): min=9, max=130, avg=25.64, stdev=15.21 00:12:28.939 clat percentiles (msec): 00:12:28.939 | 1.00th=[ 13], 5.00th=[ 14], 10.00th=[ 15], 20.00th=[ 16], 00:12:28.939 | 30.00th=[ 18], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 27], 00:12:28.939 | 70.00th=[ 29], 80.00th=[ 31], 90.00th=[ 37], 95.00th=[ 41], 00:12:28.939 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 131], 99.95th=[ 131], 00:12:28.939 | 99.99th=[ 131] 00:12:28.939 write: IOPS=65, BW=8322KiB/s (8522kB/s)(69.2MiB/8521msec); 0 zone resets 00:12:28.939 slat (usec): min=40, max=1779, avg=129.12, stdev=152.85 00:12:28.939 clat (msec): min=47, max=431, avg=120.99, stdev=57.39 00:12:28.939 lat (msec): min=47, max=431, avg=121.12, stdev=57.39 00:12:28.939 clat percentiles (msec): 00:12:28.939 | 1.00th=[ 54], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 87], 00:12:28.939 | 30.00th=[ 90], 40.00th=[ 96], 50.00th=[ 102], 60.00th=[ 112], 00:12:28.939 | 70.00th=[ 123], 80.00th=[ 138], 90.00th=[ 169], 95.00th=[ 220], 00:12:28.939 | 99.00th=[ 384], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:12:28.939 | 99.99th=[ 430] 00:12:28.939 bw ( KiB/s): min= 512, max=11520, per=0.88%, avg=7352.74, stdev=3630.87, samples=19 00:12:28.939 iops : min= 4, max= 90, avg=57.37, stdev=28.42, samples=19 00:12:28.939 lat (msec) : 10=0.10%, 20=18.47%, 50=26.79%, 100=25.82%, 250=26.69% 00:12:28.939 lat (msec) : 500=2.13% 00:12:28.939 cpu : usr=0.48%, sys=0.15%, ctx=1660, majf=0, minf=5 00:12:28.939 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 complete : 0=0.0%, 4=99.3%, 8=0.7%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 issued rwts: total=480,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.939 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.939 job82: (groupid=0, jobs=1): err= 0: pid=71039: Tue Jul 23 02:07:37 2024 00:12:28.939 read: IOPS=60, BW=7789KiB/s (7976kB/s)(60.0MiB/7888msec) 00:12:28.939 slat (usec): min=7, max=1272, avg=66.45, stdev=133.99 00:12:28.939 clat (usec): min=8006, max=63623, avg=25458.43, stdev=11732.97 00:12:28.939 lat (usec): min=8025, max=63639, avg=25524.88, stdev=11743.29 00:12:28.939 clat percentiles (usec): 00:12:28.939 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[11731], 20.00th=[15664], 00:12:28.939 | 30.00th=[18744], 40.00th=[21103], 50.00th=[22676], 60.00th=[26346], 00:12:28.939 | 70.00th=[30278], 80.00th=[33817], 90.00th=[40109], 95.00th=[50594], 00:12:28.939 | 99.00th=[61080], 99.50th=[61604], 99.90th=[63701], 99.95th=[63701], 00:12:28.939 | 99.99th=[63701] 00:12:28.939 write: IOPS=59, BW=7649KiB/s (7832kB/s)(63.8MiB/8535msec); 0 zone resets 00:12:28.939 slat (usec): min=28, max=5811, avg=140.79, stdev=308.33 00:12:28.939 clat (msec): min=35, max=394, avg=132.73, stdev=59.80 00:12:28.939 lat (msec): min=36, max=394, avg=132.87, stdev=59.79 00:12:28.939 clat percentiles (msec): 00:12:28.939 | 1.00th=[ 43], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.939 | 30.00th=[ 93], 40.00th=[ 101], 50.00th=[ 110], 60.00th=[ 126], 00:12:28.939 | 70.00th=[ 144], 80.00th=[ 167], 90.00th=[ 230], 95.00th=[ 255], 00:12:28.939 | 99.00th=[ 355], 99.50th=[ 372], 99.90th=[ 397], 99.95th=[ 397], 00:12:28.939 | 99.99th=[ 397] 00:12:28.939 bw ( KiB/s): min= 1792, max=11008, per=0.81%, avg=6762.95, stdev=3139.30, samples=19 00:12:28.939 iops : min= 14, max= 86, avg=52.74, stdev=24.58, samples=19 00:12:28.939 lat (msec) : 10=3.64%, 20=13.03%, 50=30.10%, 100=22.32%, 250=28.08% 00:12:28.939 lat (msec) : 500=2.83% 00:12:28.939 cpu : usr=0.42%, sys=0.20%, ctx=1613, majf=0, minf=5 00:12:28.939 IO depths : 
1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 issued rwts: total=480,510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.939 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.939 job83: (groupid=0, jobs=1): err= 0: pid=71040: Tue Jul 23 02:07:37 2024 00:12:28.939 read: IOPS=59, BW=7572KiB/s (7754kB/s)(60.0MiB/8114msec) 00:12:28.939 slat (usec): min=7, max=1787, avg=76.41, stdev=173.76 00:12:28.939 clat (usec): min=9221, max=73589, avg=28059.27, stdev=12908.46 00:12:28.939 lat (usec): min=9274, max=73607, avg=28135.67, stdev=12915.44 00:12:28.939 clat percentiles (usec): 00:12:28.939 | 1.00th=[ 9896], 5.00th=[11338], 10.00th=[12780], 20.00th=[15139], 00:12:28.939 | 30.00th=[20579], 40.00th=[23987], 50.00th=[27395], 60.00th=[29230], 00:12:28.939 | 70.00th=[31327], 80.00th=[35914], 90.00th=[45351], 95.00th=[55837], 00:12:28.939 | 99.00th=[66323], 99.50th=[68682], 99.90th=[73925], 99.95th=[73925], 00:12:28.939 | 99.99th=[73925] 00:12:28.939 write: IOPS=64, BW=8209KiB/s (8406kB/s)(67.1MiB/8373msec); 0 zone resets 00:12:28.939 slat (usec): min=44, max=7820, avg=155.02, stdev=404.31 00:12:28.939 clat (msec): min=67, max=422, avg=123.10, stdev=55.53 00:12:28.939 lat (msec): min=69, max=422, avg=123.25, stdev=55.52 00:12:28.939 clat percentiles (msec): 00:12:28.939 | 1.00th=[ 75], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 87], 00:12:28.939 | 30.00th=[ 90], 40.00th=[ 94], 50.00th=[ 101], 60.00th=[ 112], 00:12:28.939 | 70.00th=[ 124], 80.00th=[ 150], 90.00th=[ 188], 95.00th=[ 234], 00:12:28.939 | 99.00th=[ 351], 99.50th=[ 380], 99.90th=[ 422], 99.95th=[ 422], 00:12:28.939 | 99.99th=[ 422] 00:12:28.939 bw ( KiB/s): min= 1536, max=11520, per=0.90%, avg=7537.17, stdev=3328.21, samples=18 00:12:28.939 iops : min= 12, max= 90, avg=58.78, stdev=26.12, samples=18 00:12:28.939 
lat (msec) : 10=0.49%, 20=12.98%, 50=29.70%, 100=29.99%, 250=24.88% 00:12:28.939 lat (msec) : 500=1.97% 00:12:28.939 cpu : usr=0.45%, sys=0.20%, ctx=1727, majf=0, minf=1 00:12:28.939 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.939 issued rwts: total=480,537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.939 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.939 job84: (groupid=0, jobs=1): err= 0: pid=71041: Tue Jul 23 02:07:37 2024 00:12:28.939 read: IOPS=59, BW=7614KiB/s (7797kB/s)(60.0MiB/8069msec) 00:12:28.939 slat (usec): min=7, max=1193, avg=68.34, stdev=119.96 00:12:28.939 clat (usec): min=9440, max=70706, avg=24699.67, stdev=10709.27 00:12:28.939 lat (usec): min=9552, max=70716, avg=24768.01, stdev=10693.98 00:12:28.939 clat percentiles (usec): 00:12:28.939 | 1.00th=[ 9765], 5.00th=[10814], 10.00th=[11600], 20.00th=[14877], 00:12:28.940 | 30.00th=[17957], 40.00th=[21365], 50.00th=[23462], 60.00th=[26870], 00:12:28.940 | 70.00th=[29492], 80.00th=[32375], 90.00th=[35914], 95.00th=[44303], 00:12:28.940 | 99.00th=[61604], 99.50th=[68682], 99.90th=[70779], 99.95th=[70779], 00:12:28.940 | 99.99th=[70779] 00:12:28.940 write: IOPS=61, BW=7857KiB/s (8046kB/s)(65.8MiB/8569msec); 0 zone resets 00:12:28.940 slat (usec): min=43, max=2251, avg=128.66, stdev=158.25 00:12:28.940 clat (msec): min=23, max=464, avg=128.88, stdev=61.39 00:12:28.940 lat (msec): min=23, max=464, avg=129.01, stdev=61.39 00:12:28.940 clat percentiles (msec): 00:12:28.940 | 1.00th=[ 32], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 86], 00:12:28.940 | 30.00th=[ 91], 40.00th=[ 97], 50.00th=[ 112], 60.00th=[ 122], 00:12:28.940 | 70.00th=[ 140], 80.00th=[ 157], 90.00th=[ 205], 95.00th=[ 255], 00:12:28.940 | 99.00th=[ 359], 99.50th=[ 426], 99.90th=[ 464], 99.95th=[ 464], 00:12:28.940 | 
99.99th=[ 464] 00:12:28.940 bw ( KiB/s): min= 1788, max=12288, per=0.83%, avg=6978.68, stdev=3454.53, samples=19 00:12:28.940 iops : min= 13, max= 96, avg=54.37, stdev=27.23, samples=19 00:12:28.940 lat (msec) : 10=0.89%, 20=15.81%, 50=30.42%, 100=22.76%, 250=27.14% 00:12:28.940 lat (msec) : 500=2.98% 00:12:28.940 cpu : usr=0.42%, sys=0.24%, ctx=1606, majf=0, minf=3 00:12:28.940 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 issued rwts: total=480,526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.940 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.940 job85: (groupid=0, jobs=1): err= 0: pid=71042: Tue Jul 23 02:07:37 2024 00:12:28.940 read: IOPS=47, BW=6092KiB/s (6238kB/s)(40.0MiB/6724msec) 00:12:28.940 slat (usec): min=7, max=768, avg=59.71, stdev=95.86 00:12:28.940 clat (usec): min=7220, max=88956, avg=21607.48, stdev=14931.69 00:12:28.940 lat (usec): min=7241, max=89030, avg=21667.19, stdev=14929.46 00:12:28.940 clat percentiles (usec): 00:12:28.940 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[12125], 00:12:28.940 | 30.00th=[13698], 40.00th=[16188], 50.00th=[17957], 60.00th=[20055], 00:12:28.940 | 70.00th=[21365], 80.00th=[23987], 90.00th=[34866], 95.00th=[55313], 00:12:28.940 | 99.00th=[87557], 99.50th=[88605], 99.90th=[88605], 99.95th=[88605], 00:12:28.940 | 99.99th=[88605] 00:12:28.940 write: IOPS=48, BW=6271KiB/s (6422kB/s)(56.4MiB/9205msec); 0 zone resets 00:12:28.940 slat (usec): min=44, max=20079, avg=213.11, stdev=989.89 00:12:28.940 clat (msec): min=2, max=527, avg=162.47, stdev=65.81 00:12:28.940 lat (msec): min=4, max=527, avg=162.68, stdev=65.76 00:12:28.940 clat percentiles (msec): 00:12:28.940 | 1.00th=[ 26], 5.00th=[ 86], 10.00th=[ 94], 20.00th=[ 108], 00:12:28.940 | 30.00th=[ 125], 40.00th=[ 136], 50.00th=[ 
157], 60.00th=[ 171], 00:12:28.940 | 70.00th=[ 184], 80.00th=[ 207], 90.00th=[ 251], 95.00th=[ 284], 00:12:28.940 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 527], 99.95th=[ 527], 00:12:28.940 | 99.99th=[ 527] 00:12:28.940 bw ( KiB/s): min= 512, max= 9728, per=0.68%, avg=5681.30, stdev=2199.46, samples=20 00:12:28.940 iops : min= 4, max= 76, avg=44.25, stdev=17.16, samples=20 00:12:28.940 lat (msec) : 4=0.13%, 10=4.54%, 20=20.49%, 50=14.92%, 100=9.73% 00:12:28.940 lat (msec) : 250=44.23%, 500=5.84%, 750=0.13% 00:12:28.940 cpu : usr=0.32%, sys=0.16%, ctx=1343, majf=0, minf=13 00:12:28.940 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 issued rwts: total=320,451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.940 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.940 job86: (groupid=0, jobs=1): err= 0: pid=71043: Tue Jul 23 02:07:37 2024 00:12:28.940 read: IOPS=58, BW=7528KiB/s (7709kB/s)(60.0MiB/8161msec) 00:12:28.940 slat (usec): min=6, max=1310, avg=59.96, stdev=110.71 00:12:28.940 clat (usec): min=12524, max=68728, avg=26042.60, stdev=10172.01 00:12:28.940 lat (usec): min=12631, max=68746, avg=26102.55, stdev=10185.72 00:12:28.940 clat percentiles (usec): 00:12:28.940 | 1.00th=[12780], 5.00th=[13698], 10.00th=[14615], 20.00th=[16909], 00:12:28.940 | 30.00th=[19530], 40.00th=[22152], 50.00th=[24249], 60.00th=[26346], 00:12:28.940 | 70.00th=[29492], 80.00th=[33424], 90.00th=[38011], 95.00th=[46400], 00:12:28.940 | 99.00th=[62653], 99.50th=[67634], 99.90th=[68682], 99.95th=[68682], 00:12:28.940 | 99.99th=[68682] 00:12:28.940 write: IOPS=64, BW=8206KiB/s (8403kB/s)(68.1MiB/8501msec); 0 zone resets 00:12:28.940 slat (usec): min=43, max=1521, avg=127.87, stdev=147.71 00:12:28.940 clat (msec): min=11, max=551, avg=123.42, stdev=66.79 00:12:28.940 lat 
(msec): min=11, max=551, avg=123.54, stdev=66.79 00:12:28.940 clat percentiles (msec): 00:12:28.940 | 1.00th=[ 19], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 88], 00:12:28.940 | 30.00th=[ 91], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 114], 00:12:28.940 | 70.00th=[ 125], 80.00th=[ 142], 90.00th=[ 167], 95.00th=[ 241], 00:12:28.940 | 99.00th=[ 397], 99.50th=[ 542], 99.90th=[ 550], 99.95th=[ 550], 00:12:28.940 | 99.99th=[ 550] 00:12:28.940 bw ( KiB/s): min= 766, max=12544, per=0.91%, avg=7650.83, stdev=3330.29, samples=18 00:12:28.940 iops : min= 5, max= 98, avg=59.67, stdev=26.18, samples=18 00:12:28.940 lat (msec) : 20=15.22%, 50=31.12%, 100=25.27%, 250=25.95%, 500=2.05% 00:12:28.940 lat (msec) : 750=0.39% 00:12:28.940 cpu : usr=0.45%, sys=0.19%, ctx=1684, majf=0, minf=1 00:12:28.940 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 issued rwts: total=480,545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.940 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.940 job87: (groupid=0, jobs=1): err= 0: pid=71044: Tue Jul 23 02:07:37 2024 00:12:28.940 read: IOPS=66, BW=8529KiB/s (8734kB/s)(55.5MiB/6663msec) 00:12:28.940 slat (usec): min=7, max=1430, avg=64.01, stdev=133.71 00:12:28.940 clat (usec): min=5832, max=97604, avg=18244.35, stdev=14469.44 00:12:28.940 lat (usec): min=5850, max=97618, avg=18308.36, stdev=14484.38 00:12:28.940 clat percentiles (usec): 00:12:28.940 | 1.00th=[ 6325], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10683], 00:12:28.940 | 30.00th=[11863], 40.00th=[12518], 50.00th=[13698], 60.00th=[16450], 00:12:28.940 | 70.00th=[18482], 80.00th=[20841], 90.00th=[30016], 95.00th=[39584], 00:12:28.940 | 99.00th=[91751], 99.50th=[93848], 99.90th=[98042], 99.95th=[98042], 00:12:28.940 | 99.99th=[98042] 00:12:28.940 write: IOPS=53, BW=6840KiB/s 
(7005kB/s)(60.0MiB/8982msec); 0 zone resets 00:12:28.940 slat (usec): min=41, max=1562, avg=130.58, stdev=147.68 00:12:28.940 clat (msec): min=77, max=369, avg=148.95, stdev=55.22 00:12:28.940 lat (msec): min=77, max=369, avg=149.08, stdev=55.21 00:12:28.940 clat percentiles (msec): 00:12:28.940 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 91], 20.00th=[ 99], 00:12:28.940 | 30.00th=[ 108], 40.00th=[ 121], 50.00th=[ 138], 60.00th=[ 157], 00:12:28.940 | 70.00th=[ 171], 80.00th=[ 194], 90.00th=[ 230], 95.00th=[ 253], 00:12:28.940 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 368], 99.95th=[ 368], 00:12:28.940 | 99.99th=[ 368] 00:12:28.940 bw ( KiB/s): min= 1792, max=10752, per=0.75%, avg=6275.79, stdev=2464.01, samples=19 00:12:28.940 iops : min= 14, max= 84, avg=48.79, stdev=19.28, samples=19 00:12:28.940 lat (msec) : 10=7.79%, 20=29.11%, 50=9.09%, 100=13.31%, 250=37.88% 00:12:28.940 lat (msec) : 500=2.81% 00:12:28.940 cpu : usr=0.38%, sys=0.23%, ctx=1473, majf=0, minf=7 00:12:28.940 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 issued rwts: total=444,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.940 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.940 job88: (groupid=0, jobs=1): err= 0: pid=71045: Tue Jul 23 02:07:37 2024 00:12:28.940 read: IOPS=60, BW=7783KiB/s (7970kB/s)(60.0MiB/7894msec) 00:12:28.940 slat (usec): min=7, max=1820, avg=68.35, stdev=149.09 00:12:28.940 clat (msec): min=11, max=231, avg=34.04, stdev=36.17 00:12:28.940 lat (msec): min=13, max=231, avg=34.11, stdev=36.16 00:12:28.940 clat percentiles (msec): 00:12:28.940 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 20], 00:12:28.940 | 30.00th=[ 22], 40.00th=[ 23], 50.00th=[ 25], 60.00th=[ 27], 00:12:28.940 | 70.00th=[ 30], 80.00th=[ 34], 90.00th=[ 46], 95.00th=[ 93], 
00:12:28.940 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 232], 99.95th=[ 232], 00:12:28.940 | 99.99th=[ 232] 00:12:28.940 write: IOPS=68, BW=8782KiB/s (8993kB/s)(68.8MiB/8016msec); 0 zone resets 00:12:28.940 slat (usec): min=43, max=1566, avg=121.94, stdev=152.88 00:12:28.940 clat (msec): min=58, max=421, avg=115.20, stdev=49.02 00:12:28.940 lat (msec): min=58, max=421, avg=115.33, stdev=49.03 00:12:28.940 clat percentiles (msec): 00:12:28.940 | 1.00th=[ 65], 5.00th=[ 84], 10.00th=[ 85], 20.00th=[ 86], 00:12:28.940 | 30.00th=[ 88], 40.00th=[ 93], 50.00th=[ 99], 60.00th=[ 107], 00:12:28.940 | 70.00th=[ 117], 80.00th=[ 133], 90.00th=[ 163], 95.00th=[ 207], 00:12:28.940 | 99.00th=[ 326], 99.50th=[ 418], 99.90th=[ 422], 99.95th=[ 422], 00:12:28.940 | 99.99th=[ 422] 00:12:28.940 bw ( KiB/s): min= 1795, max=11776, per=0.97%, avg=8155.88, stdev=2841.88, samples=17 00:12:28.940 iops : min= 14, max= 92, avg=63.59, stdev=22.22, samples=17 00:12:28.940 lat (msec) : 20=9.71%, 50=33.30%, 100=29.90%, 250=25.63%, 500=1.46% 00:12:28.940 cpu : usr=0.40%, sys=0.24%, ctx=1663, majf=0, minf=5 00:12:28.940 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.940 issued rwts: total=480,550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.940 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.940 job89: (groupid=0, jobs=1): err= 0: pid=71046: Tue Jul 23 02:07:37 2024 00:12:28.940 read: IOPS=46, BW=5904KiB/s (6045kB/s)(40.0MiB/6938msec) 00:12:28.940 slat (usec): min=6, max=1168, avg=65.41, stdev=132.86 00:12:28.941 clat (msec): min=5, max=224, avg=18.41, stdev=30.76 00:12:28.941 lat (msec): min=5, max=224, avg=18.48, stdev=30.76 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:12:28.941 | 30.00th=[ 10], 40.00th=[ 11], 
50.00th=[ 13], 60.00th=[ 15], 00:12:28.941 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 21], 95.00th=[ 23], 00:12:28.941 | 99.00th=[ 213], 99.50th=[ 218], 99.90th=[ 226], 99.95th=[ 226], 00:12:28.941 | 99.99th=[ 226] 00:12:28.941 write: IOPS=51, BW=6634KiB/s (6793kB/s)(60.0MiB/9262msec); 0 zone resets 00:12:28.941 slat (usec): min=30, max=2468, avg=153.54, stdev=205.54 00:12:28.941 clat (msec): min=28, max=369, avg=153.49, stdev=62.39 00:12:28.941 lat (msec): min=28, max=369, avg=153.64, stdev=62.40 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 34], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 92], 00:12:28.941 | 30.00th=[ 106], 40.00th=[ 123], 50.00th=[ 146], 60.00th=[ 167], 00:12:28.941 | 70.00th=[ 180], 80.00th=[ 207], 90.00th=[ 243], 95.00th=[ 266], 00:12:28.941 | 99.00th=[ 317], 99.50th=[ 355], 99.90th=[ 372], 99.95th=[ 372], 00:12:28.941 | 99.99th=[ 372] 00:12:28.941 bw ( KiB/s): min= 1792, max=11008, per=0.71%, avg=5967.74, stdev=2563.42, samples=19 00:12:28.941 iops : min= 14, max= 86, avg=46.53, stdev=20.07, samples=19 00:12:28.941 lat (msec) : 10=14.13%, 20=19.62%, 50=6.25%, 100=14.88%, 250=40.12% 00:12:28.941 lat (msec) : 500=5.00% 00:12:28.941 cpu : usr=0.33%, sys=0.21%, ctx=1351, majf=0, minf=3 00:12:28.941 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 complete : 0=0.0%, 4=99.4%, 8=0.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 issued rwts: total=320,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.941 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.941 job90: (groupid=0, jobs=1): err= 0: pid=71048: Tue Jul 23 02:07:37 2024 00:12:28.941 read: IOPS=45, BW=5886KiB/s (6027kB/s)(40.0MiB/6959msec) 00:12:28.941 slat (usec): min=6, max=735, avg=50.39, stdev=89.21 00:12:28.941 clat (msec): min=6, max=178, avg=29.22, stdev=29.83 00:12:28.941 lat (msec): min=6, max=178, avg=29.27, stdev=29.83 00:12:28.941 
clat percentiles (msec): 00:12:28.941 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 12], 20.00th=[ 14], 00:12:28.941 | 30.00th=[ 15], 40.00th=[ 18], 50.00th=[ 20], 60.00th=[ 24], 00:12:28.941 | 70.00th=[ 26], 80.00th=[ 32], 90.00th=[ 61], 95.00th=[ 83], 00:12:28.941 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 180], 99.95th=[ 180], 00:12:28.941 | 99.99th=[ 180] 00:12:28.941 write: IOPS=51, BW=6530KiB/s (6687kB/s)(56.6MiB/8879msec); 0 zone resets 00:12:28.941 slat (usec): min=44, max=3326, avg=179.65, stdev=238.26 00:12:28.941 clat (msec): min=82, max=402, avg=155.89, stdev=67.37 00:12:28.941 lat (msec): min=82, max=402, avg=156.07, stdev=67.38 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 85], 5.00th=[ 86], 10.00th=[ 90], 20.00th=[ 95], 00:12:28.941 | 30.00th=[ 107], 40.00th=[ 120], 50.00th=[ 136], 60.00th=[ 159], 00:12:28.941 | 70.00th=[ 178], 80.00th=[ 215], 90.00th=[ 255], 95.00th=[ 305], 00:12:28.941 | 99.00th=[ 347], 99.50th=[ 372], 99.90th=[ 401], 99.95th=[ 401], 00:12:28.941 | 99.99th=[ 401] 00:12:28.941 bw ( KiB/s): min= 252, max=10899, per=0.66%, avg=5555.68, stdev=2895.91, samples=19 00:12:28.941 iops : min= 1, max= 85, avg=42.79, stdev=22.80, samples=19 00:12:28.941 lat (msec) : 10=2.07%, 20=19.92%, 50=14.49%, 100=18.24%, 250=38.94% 00:12:28.941 lat (msec) : 500=6.34% 00:12:28.941 cpu : usr=0.36%, sys=0.20%, ctx=1330, majf=0, minf=5 00:12:28.941 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=95.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 issued rwts: total=320,453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.941 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.941 job91: (groupid=0, jobs=1): err= 0: pid=71052: Tue Jul 23 02:07:37 2024 00:12:28.941 read: IOPS=60, BW=7747KiB/s (7933kB/s)(60.0MiB/7931msec) 00:12:28.941 slat (usec): min=6, max=600, avg=60.40, stdev=90.98 
00:12:28.941 clat (usec): min=11450, max=71779, avg=25491.20, stdev=10098.66 00:12:28.941 lat (usec): min=11478, max=71794, avg=25551.61, stdev=10107.10 00:12:28.941 clat percentiles (usec): 00:12:28.941 | 1.00th=[12518], 5.00th=[14222], 10.00th=[15139], 20.00th=[16712], 00:12:28.941 | 30.00th=[18220], 40.00th=[19792], 50.00th=[22414], 60.00th=[26608], 00:12:28.941 | 70.00th=[29754], 80.00th=[32900], 90.00th=[40109], 95.00th=[46400], 00:12:28.941 | 99.00th=[54264], 99.50th=[54789], 99.90th=[71828], 99.95th=[71828], 00:12:28.941 | 99.99th=[71828] 00:12:28.941 write: IOPS=63, BW=8165KiB/s (8361kB/s)(68.0MiB/8528msec); 0 zone resets 00:12:28.941 slat (usec): min=36, max=2171, avg=154.13, stdev=181.66 00:12:28.941 clat (msec): min=67, max=559, avg=124.13, stdev=64.13 00:12:28.941 lat (msec): min=67, max=559, avg=124.29, stdev=64.12 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 74], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:12:28.941 | 30.00th=[ 92], 40.00th=[ 97], 50.00th=[ 103], 60.00th=[ 110], 00:12:28.941 | 70.00th=[ 120], 80.00th=[ 138], 90.00th=[ 180], 95.00th=[ 262], 00:12:28.941 | 99.00th=[ 405], 99.50th=[ 498], 99.90th=[ 558], 99.95th=[ 558], 00:12:28.941 | 99.99th=[ 558] 00:12:28.941 bw ( KiB/s): min= 255, max=11008, per=0.91%, avg=7629.83, stdev=3112.69, samples=18 00:12:28.941 iops : min= 1, max= 86, avg=59.44, stdev=24.51, samples=18 00:12:28.941 lat (msec) : 20=18.75%, 50=26.76%, 100=25.88%, 250=25.59%, 500=2.93% 00:12:28.941 lat (msec) : 750=0.10% 00:12:28.941 cpu : usr=0.44%, sys=0.27%, ctx=1735, majf=0, minf=3 00:12:28.941 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 issued rwts: total=480,544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.941 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.941 job92: 
(groupid=0, jobs=1): err= 0: pid=71054: Tue Jul 23 02:07:37 2024 00:12:28.941 read: IOPS=59, BW=7593KiB/s (7775kB/s)(60.0MiB/8092msec) 00:12:28.941 slat (usec): min=6, max=1661, avg=65.00, stdev=137.16 00:12:28.941 clat (msec): min=6, max=189, avg=30.94, stdev=28.67 00:12:28.941 lat (msec): min=6, max=189, avg=31.00, stdev=28.68 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 12], 20.00th=[ 13], 00:12:28.941 | 30.00th=[ 14], 40.00th=[ 19], 50.00th=[ 24], 60.00th=[ 29], 00:12:28.941 | 70.00th=[ 33], 80.00th=[ 41], 90.00th=[ 56], 95.00th=[ 89], 00:12:28.941 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 190], 99.95th=[ 190], 00:12:28.941 | 99.99th=[ 190] 00:12:28.941 write: IOPS=59, BW=7603KiB/s (7785kB/s)(61.0MiB/8216msec); 0 zone resets 00:12:28.941 slat (usec): min=37, max=7379, avg=166.73, stdev=376.69 00:12:28.941 clat (msec): min=37, max=401, avg=133.29, stdev=61.31 00:12:28.941 lat (msec): min=37, max=401, avg=133.45, stdev=61.30 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 43], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:12:28.941 | 30.00th=[ 94], 40.00th=[ 102], 50.00th=[ 108], 60.00th=[ 123], 00:12:28.941 | 70.00th=[ 148], 80.00th=[ 169], 90.00th=[ 209], 95.00th=[ 259], 00:12:28.941 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 401], 00:12:28.941 | 99.99th=[ 401] 00:12:28.941 bw ( KiB/s): min= 768, max=11264, per=0.86%, avg=7225.53, stdev=3131.96, samples=17 00:12:28.941 iops : min= 6, max= 88, avg=56.29, stdev=24.42, samples=17 00:12:28.941 lat (msec) : 10=4.13%, 20=17.05%, 50=23.14%, 100=23.35%, 250=29.13% 00:12:28.941 lat (msec) : 500=3.20% 00:12:28.941 cpu : usr=0.49%, sys=0.20%, ctx=1661, majf=0, minf=7 00:12:28.941 IO depths : 1=0.7%, 2=1.4%, 4=2.9%, 8=94.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 complete : 0=0.0%, 4=99.2%, 8=0.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.941 issued rwts: 
total=480,488,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.941 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.941 job93: (groupid=0, jobs=1): err= 0: pid=71060: Tue Jul 23 02:07:37 2024 00:12:28.941 read: IOPS=57, BW=7333KiB/s (7509kB/s)(60.0MiB/8378msec) 00:12:28.941 slat (usec): min=7, max=1804, avg=84.75, stdev=180.69 00:12:28.941 clat (usec): min=7646, max=85643, avg=23620.88, stdev=12682.89 00:12:28.941 lat (usec): min=7680, max=85862, avg=23705.62, stdev=12686.16 00:12:28.941 clat percentiles (usec): 00:12:28.941 | 1.00th=[ 7963], 5.00th=[10945], 10.00th=[12518], 20.00th=[14353], 00:12:28.941 | 30.00th=[16712], 40.00th=[17957], 50.00th=[20317], 60.00th=[22938], 00:12:28.941 | 70.00th=[25822], 80.00th=[29754], 90.00th=[39060], 95.00th=[46924], 00:12:28.941 | 99.00th=[83362], 99.50th=[84411], 99.90th=[85459], 99.95th=[85459], 00:12:28.941 | 99.99th=[85459] 00:12:28.941 write: IOPS=62, BW=8054KiB/s (8248kB/s)(68.1MiB/8661msec); 0 zone resets 00:12:28.941 slat (usec): min=35, max=2295, avg=163.57, stdev=227.21 00:12:28.941 clat (msec): min=4, max=524, avg=126.31, stdev=71.01 00:12:28.941 lat (msec): min=4, max=524, avg=126.47, stdev=71.00 00:12:28.941 clat percentiles (msec): 00:12:28.941 | 1.00th=[ 7], 5.00th=[ 57], 10.00th=[ 86], 20.00th=[ 90], 00:12:28.941 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 112], 00:12:28.942 | 70.00th=[ 129], 80.00th=[ 153], 90.00th=[ 218], 95.00th=[ 271], 00:12:28.942 | 99.00th=[ 430], 99.50th=[ 472], 99.90th=[ 523], 99.95th=[ 523], 00:12:28.942 | 99.99th=[ 523] 00:12:28.942 bw ( KiB/s): min= 1280, max=16929, per=0.91%, avg=7638.33, stdev=3788.86, samples=18 00:12:28.942 iops : min= 10, max= 132, avg=59.56, stdev=29.69, samples=18 00:12:28.942 lat (msec) : 10=3.71%, 20=21.17%, 50=23.12%, 100=22.15%, 250=26.24% 00:12:28.942 lat (msec) : 500=3.51%, 750=0.10% 00:12:28.942 cpu : usr=0.51%, sys=0.22%, ctx=1718, majf=0, minf=5 00:12:28.942 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.2%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:12:28.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 issued rwts: total=480,545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.942 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.942 job94: (groupid=0, jobs=1): err= 0: pid=71061: Tue Jul 23 02:07:37 2024 00:12:28.942 read: IOPS=48, BW=6269KiB/s (6419kB/s)(40.0MiB/6534msec) 00:12:28.942 slat (usec): min=7, max=1203, avg=83.46, stdev=155.65 00:12:28.942 clat (msec): min=5, max=305, avg=33.71, stdev=47.79 00:12:28.942 lat (msec): min=5, max=305, avg=33.79, stdev=47.79 00:12:28.942 clat percentiles (msec): 00:12:28.942 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:12:28.942 | 30.00th=[ 16], 40.00th=[ 20], 50.00th=[ 22], 60.00th=[ 25], 00:12:28.942 | 70.00th=[ 29], 80.00th=[ 33], 90.00th=[ 45], 95.00th=[ 114], 00:12:28.942 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:12:28.942 | 99.99th=[ 305] 00:12:28.942 write: IOPS=48, BW=6216KiB/s (6365kB/s)(52.9MiB/8710msec); 0 zone resets 00:12:28.942 slat (usec): min=32, max=2430, avg=167.06, stdev=206.73 00:12:28.942 clat (msec): min=86, max=456, avg=163.99, stdev=57.80 00:12:28.942 lat (msec): min=86, max=456, avg=164.16, stdev=57.81 00:12:28.942 clat percentiles (msec): 00:12:28.942 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 100], 20.00th=[ 110], 00:12:28.942 | 30.00th=[ 126], 40.00th=[ 142], 50.00th=[ 157], 60.00th=[ 169], 00:12:28.942 | 70.00th=[ 178], 80.00th=[ 213], 90.00th=[ 241], 95.00th=[ 279], 00:12:28.942 | 99.00th=[ 321], 99.50th=[ 372], 99.90th=[ 456], 99.95th=[ 456], 00:12:28.942 | 99.99th=[ 456] 00:12:28.942 bw ( KiB/s): min= 3541, max= 9124, per=0.68%, avg=5717.41, stdev=1662.48, samples=17 00:12:28.942 iops : min= 27, max= 71, avg=44.06, stdev=13.08, samples=17 00:12:28.942 lat (msec) : 10=2.02%, 20=16.29%, 50=20.46%, 100=7.54%, 250=48.18% 00:12:28.942 lat (msec) : 
500=5.52% 00:12:28.942 cpu : usr=0.39%, sys=0.11%, ctx=1316, majf=0, minf=7 00:12:28.942 IO depths : 1=0.7%, 2=1.3%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 issued rwts: total=320,423,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.942 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.942 job95: (groupid=0, jobs=1): err= 0: pid=71063: Tue Jul 23 02:07:37 2024 00:12:28.942 read: IOPS=58, BW=7508KiB/s (7688kB/s)(60.0MiB/8183msec) 00:12:28.942 slat (usec): min=6, max=1970, avg=65.15, stdev=159.63 00:12:28.942 clat (usec): min=5763, max=58905, avg=17982.86, stdev=7538.63 00:12:28.942 lat (usec): min=5964, max=58914, avg=18048.01, stdev=7536.16 00:12:28.942 clat percentiles (usec): 00:12:28.942 | 1.00th=[ 8094], 5.00th=[10028], 10.00th=[10552], 20.00th=[11863], 00:12:28.942 | 30.00th=[12780], 40.00th=[14222], 50.00th=[15533], 60.00th=[17695], 00:12:28.942 | 70.00th=[20841], 80.00th=[23725], 90.00th=[29230], 95.00th=[32375], 00:12:28.942 | 99.00th=[41681], 99.50th=[47973], 99.90th=[58983], 99.95th=[58983], 00:12:28.942 | 99.99th=[58983] 00:12:28.942 write: IOPS=61, BW=7930KiB/s (8121kB/s)(69.5MiB/8974msec); 0 zone resets 00:12:28.942 slat (usec): min=33, max=1886, avg=140.05, stdev=177.59 00:12:28.942 clat (msec): min=57, max=469, avg=128.32, stdev=63.83 00:12:28.942 lat (msec): min=57, max=469, avg=128.46, stdev=63.84 00:12:28.942 clat percentiles (msec): 00:12:28.942 | 1.00th=[ 65], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:12:28.942 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 105], 60.00th=[ 116], 00:12:28.942 | 70.00th=[ 127], 80.00th=[ 159], 90.00th=[ 209], 95.00th=[ 259], 00:12:28.942 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 468], 99.95th=[ 468], 00:12:28.942 | 99.99th=[ 468] 00:12:28.942 bw ( KiB/s): min= 1792, max=11776, per=0.88%, avg=7389.42, stdev=3016.56, 
samples=19 00:12:28.942 iops : min= 14, max= 92, avg=57.53, stdev=23.79, samples=19 00:12:28.942 lat (msec) : 10=2.22%, 20=29.25%, 50=14.67%, 100=24.42%, 250=26.64% 00:12:28.942 lat (msec) : 500=2.80% 00:12:28.942 cpu : usr=0.53%, sys=0.16%, ctx=1700, majf=0, minf=3 00:12:28.942 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 issued rwts: total=480,556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.942 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.942 job96: (groupid=0, jobs=1): err= 0: pid=71064: Tue Jul 23 02:07:37 2024 00:12:28.942 read: IOPS=63, BW=8191KiB/s (8387kB/s)(60.0MiB/7501msec) 00:12:28.942 slat (usec): min=6, max=1200, avg=84.83, stdev=159.96 00:12:28.942 clat (usec): min=13960, max=99445, avg=28126.35, stdev=13244.33 00:12:28.942 lat (usec): min=14028, max=99460, avg=28211.18, stdev=13233.95 00:12:28.942 clat percentiles (usec): 00:12:28.942 | 1.00th=[14877], 5.00th=[15664], 10.00th=[16581], 20.00th=[18220], 00:12:28.942 | 30.00th=[20841], 40.00th=[22676], 50.00th=[24511], 60.00th=[26608], 00:12:28.942 | 70.00th=[28705], 80.00th=[33817], 90.00th=[44827], 95.00th=[56886], 00:12:28.942 | 99.00th=[74974], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091], 00:12:28.942 | 99.99th=[99091] 00:12:28.942 write: IOPS=63, BW=8160KiB/s (8356kB/s)(66.8MiB/8376msec); 0 zone resets 00:12:28.942 slat (usec): min=33, max=3064, avg=149.22, stdev=233.54 00:12:28.942 clat (msec): min=24, max=512, avg=124.02, stdev=60.84 00:12:28.942 lat (msec): min=24, max=512, avg=124.17, stdev=60.84 00:12:28.942 clat percentiles (msec): 00:12:28.942 | 1.00th=[ 33], 5.00th=[ 85], 10.00th=[ 88], 20.00th=[ 91], 00:12:28.942 | 30.00th=[ 93], 40.00th=[ 96], 50.00th=[ 101], 60.00th=[ 110], 00:12:28.942 | 70.00th=[ 125], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 259], 
00:12:28.942 | 99.00th=[ 359], 99.50th=[ 489], 99.90th=[ 514], 99.95th=[ 514], 00:12:28.942 | 99.99th=[ 514] 00:12:28.942 bw ( KiB/s): min= 768, max=11264, per=0.81%, avg=6745.95, stdev=3862.13, samples=20 00:12:28.942 iops : min= 6, max= 88, avg=52.55, stdev=30.36, samples=20 00:12:28.942 lat (msec) : 20=12.92%, 50=31.46%, 100=28.80%, 250=23.96%, 500=2.66% 00:12:28.942 lat (msec) : 750=0.20% 00:12:28.942 cpu : usr=0.46%, sys=0.17%, ctx=1675, majf=0, minf=5 00:12:28.942 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 issued rwts: total=480,534,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.942 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.942 job97: (groupid=0, jobs=1): err= 0: pid=71065: Tue Jul 23 02:07:37 2024 00:12:28.942 read: IOPS=63, BW=8119KiB/s (8314kB/s)(60.0MiB/7567msec) 00:12:28.942 slat (usec): min=7, max=836, avg=64.92, stdev=99.26 00:12:28.942 clat (usec): min=13995, max=69783, avg=27949.30, stdev=10044.21 00:12:28.942 lat (usec): min=14008, max=69795, avg=28014.22, stdev=10047.91 00:12:28.942 clat percentiles (usec): 00:12:28.942 | 1.00th=[14615], 5.00th=[16450], 10.00th=[17695], 20.00th=[19268], 00:12:28.942 | 30.00th=[20841], 40.00th=[23725], 50.00th=[25035], 60.00th=[27132], 00:12:28.942 | 70.00th=[32375], 80.00th=[36439], 90.00th=[40109], 95.00th=[47449], 00:12:28.942 | 99.00th=[65274], 99.50th=[65799], 99.90th=[69731], 99.95th=[69731], 00:12:28.942 | 99.99th=[69731] 00:12:28.942 write: IOPS=63, BW=8117KiB/s (8311kB/s)(66.4MiB/8374msec); 0 zone resets 00:12:28.942 slat (usec): min=44, max=4979, avg=169.94, stdev=267.17 00:12:28.942 clat (msec): min=78, max=418, avg=124.66, stdev=55.68 00:12:28.942 lat (msec): min=78, max=418, avg=124.83, stdev=55.67 00:12:28.942 clat percentiles (msec): 00:12:28.942 | 1.00th=[ 83], 5.00th=[ 
86], 10.00th=[ 88], 20.00th=[ 90], 00:12:28.942 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 112], 00:12:28.942 | 70.00th=[ 123], 80.00th=[ 144], 90.00th=[ 171], 95.00th=[ 271], 00:12:28.942 | 99.00th=[ 334], 99.50th=[ 376], 99.90th=[ 418], 99.95th=[ 418], 00:12:28.942 | 99.99th=[ 418] 00:12:28.942 bw ( KiB/s): min= 1774, max=10899, per=0.83%, avg=6919.33, stdev=3300.93, samples=18 00:12:28.942 iops : min= 13, max= 85, avg=53.50, stdev=26.01, samples=18 00:12:28.942 lat (msec) : 20=11.47%, 50=34.42%, 100=25.02%, 250=26.01%, 500=3.07% 00:12:28.942 cpu : usr=0.50%, sys=0.23%, ctx=1770, majf=0, minf=1 00:12:28.942 IO depths : 1=0.7%, 2=1.4%, 4=2.8%, 8=95.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.942 issued rwts: total=480,531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.942 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.942 job98: (groupid=0, jobs=1): err= 0: pid=71066: Tue Jul 23 02:07:37 2024 00:12:28.942 read: IOPS=59, BW=7612KiB/s (7795kB/s)(60.0MiB/8071msec) 00:12:28.942 slat (usec): min=7, max=1483, avg=65.21, stdev=142.06 00:12:28.942 clat (usec): min=9174, max=61967, avg=25839.61, stdev=11776.88 00:12:28.942 lat (usec): min=9592, max=61978, avg=25904.82, stdev=11774.74 00:12:28.943 clat percentiles (usec): 00:12:28.943 | 1.00th=[10028], 5.00th=[11076], 10.00th=[12387], 20.00th=[15664], 00:12:28.943 | 30.00th=[18220], 40.00th=[20841], 50.00th=[23725], 60.00th=[26346], 00:12:28.943 | 70.00th=[29754], 80.00th=[33424], 90.00th=[42206], 95.00th=[51643], 00:12:28.943 | 99.00th=[59507], 99.50th=[60031], 99.90th=[62129], 99.95th=[62129], 00:12:28.943 | 99.99th=[62129] 00:12:28.943 write: IOPS=64, BW=8316KiB/s (8516kB/s)(69.2MiB/8527msec); 0 zone resets 00:12:28.943 slat (usec): min=34, max=2953, avg=171.65, stdev=257.46 00:12:28.943 clat (msec): min=8, max=422, 
avg=122.13, stdev=55.41 00:12:28.943 lat (msec): min=8, max=422, avg=122.30, stdev=55.43 00:12:28.943 clat percentiles (msec): 00:12:28.943 | 1.00th=[ 14], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:12:28.943 | 30.00th=[ 93], 40.00th=[ 97], 50.00th=[ 104], 60.00th=[ 110], 00:12:28.943 | 70.00th=[ 126], 80.00th=[ 146], 90.00th=[ 190], 95.00th=[ 247], 00:12:28.943 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 422], 99.95th=[ 422], 00:12:28.943 | 99.99th=[ 422] 00:12:28.943 bw ( KiB/s): min= 1024, max=13285, per=0.83%, avg=6986.55, stdev=3565.71, samples=20 00:12:28.943 iops : min= 8, max= 103, avg=54.45, stdev=27.87, samples=20 00:12:28.943 lat (msec) : 10=0.39%, 20=18.28%, 50=26.40%, 100=26.50%, 250=26.21% 00:12:28.943 lat (msec) : 500=2.22% 00:12:28.943 cpu : usr=0.43%, sys=0.32%, ctx=1642, majf=0, minf=1 00:12:28.943 IO depths : 1=0.7%, 2=1.4%, 4=2.7%, 8=95.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.943 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.943 issued rwts: total=480,554,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.943 job99: (groupid=0, jobs=1): err= 0: pid=71067: Tue Jul 23 02:07:37 2024 00:12:28.943 read: IOPS=61, BW=7906KiB/s (8096kB/s)(59.2MiB/7674msec) 00:12:28.943 slat (usec): min=6, max=1367, avg=68.05, stdev=139.51 00:12:28.943 clat (msec): min=10, max=241, avg=33.44, stdev=31.39 00:12:28.943 lat (msec): min=10, max=241, avg=33.51, stdev=31.40 00:12:28.943 clat percentiles (msec): 00:12:28.943 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 16], 00:12:28.943 | 30.00th=[ 19], 40.00th=[ 22], 50.00th=[ 26], 60.00th=[ 30], 00:12:28.943 | 70.00th=[ 34], 80.00th=[ 44], 90.00th=[ 53], 95.00th=[ 71], 00:12:28.943 | 99.00th=[ 194], 99.50th=[ 222], 99.90th=[ 243], 99.95th=[ 243], 00:12:28.943 | 99.99th=[ 243] 00:12:28.943 write: IOPS=59, BW=7654KiB/s 
(7838kB/s)(60.0MiB/8027msec); 0 zone resets 00:12:28.943 slat (usec): min=45, max=4418, avg=173.92, stdev=268.93 00:12:28.943 clat (msec): min=44, max=486, avg=132.03, stdev=72.48 00:12:28.943 lat (msec): min=44, max=486, avg=132.21, stdev=72.49 00:12:28.943 clat percentiles (msec): 00:12:28.943 | 1.00th=[ 50], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 90], 00:12:28.943 | 30.00th=[ 94], 40.00th=[ 99], 50.00th=[ 105], 60.00th=[ 117], 00:12:28.943 | 70.00th=[ 130], 80.00th=[ 159], 90.00th=[ 197], 95.00th=[ 271], 00:12:28.943 | 99.00th=[ 439], 99.50th=[ 451], 99.90th=[ 485], 99.95th=[ 485], 00:12:28.943 | 99.99th=[ 485] 00:12:28.943 bw ( KiB/s): min= 1792, max=10496, per=0.82%, avg=6825.22, stdev=3294.06, samples=18 00:12:28.943 iops : min= 14, max= 82, avg=53.22, stdev=25.73, samples=18 00:12:28.943 lat (msec) : 20=15.83%, 50=27.15%, 100=26.31%, 250=27.88%, 500=2.83% 00:12:28.943 cpu : usr=0.46%, sys=0.23%, ctx=1589, majf=0, minf=10 00:12:28.943 IO depths : 1=0.6%, 2=1.3%, 4=2.5%, 8=95.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:28.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.943 complete : 0=0.0%, 4=99.3%, 8=0.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.943 issued rwts: total=474,480,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.943 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:28.943 00:12:28.943 Run status group 0 (all jobs): 00:12:28.943 READ: bw=730MiB/s (765MB/s), 5758KiB/s-11.5MiB/s (5897kB/s-12.0MB/s), io=6548MiB (6866MB), run=6409-8974msec 00:12:28.943 WRITE: bw=817MiB/s (857MB/s), 5768KiB/s-11.8MiB/s (5906kB/s-12.4MB/s), io=7569MiB (7936MB), run=7910-9263msec 00:12:28.943 00:12:28.943 Disk stats (read/write): 00:12:28.943 sdb: ios=515/559, merge=0/0, ticks=8976/67871, in_queue=76847, util=75.73% 00:12:28.943 sdd: ios=362/480, merge=0/0, ticks=11913/64454, in_queue=76367, util=75.67% 00:12:28.943 sdf: ios=514/519, merge=0/0, ticks=14536/61500, in_queue=76037, util=75.71% 00:12:28.943 sdh: ios=515/549, 
merge=0/0, ticks=9239/67876, in_queue=77116, util=76.55% 00:12:28.943 sdi: ios=515/519, merge=0/0, ticks=13827/61566, in_queue=75394, util=76.21% 00:12:28.943 sdm: ios=515/498, merge=0/0, ticks=13119/62670, in_queue=75789, util=76.73% 00:12:28.943 sdq: ios=515/532, merge=0/0, ticks=13333/63973, in_queue=77307, util=76.82% 00:12:28.943 sdt: ios=354/458, merge=0/0, ticks=5685/71116, in_queue=76801, util=77.34% 00:12:28.943 sdu: ios=334/406, merge=0/0, ticks=8641/68781, in_queue=77423, util=77.19% 00:12:28.943 sdy: ios=504/495, merge=0/0, ticks=13867/61678, in_queue=75545, util=77.26% 00:12:28.943 sdj: ios=679/787, merge=0/0, ticks=9554/67335, in_queue=76889, util=77.71% 00:12:28.943 sdl: ios=640/640, merge=0/0, ticks=11451/65607, in_queue=77058, util=78.23% 00:12:28.943 sdo: ios=684/800, merge=0/0, ticks=8658/68997, in_queue=77655, util=78.60% 00:12:28.943 sdr: ios=642/791, merge=0/0, ticks=8754/67640, in_queue=76395, util=79.02% 00:12:28.943 sdv: ios=641/649, merge=0/0, ticks=13112/62666, in_queue=75778, util=79.60% 00:12:28.943 sdx: ios=642/760, merge=0/0, ticks=9481/66618, in_queue=76100, util=79.91% 00:12:28.943 sdz: ios=642/681, merge=0/0, ticks=16631/59837, in_queue=76469, util=79.91% 00:12:28.943 sdab: ios=641/763, merge=0/0, ticks=12475/63921, in_queue=76397, util=80.13% 00:12:28.943 sdac: ios=641/680, merge=0/0, ticks=11442/64738, in_queue=76180, util=80.17% 00:12:28.943 sdad: ios=743/800, merge=0/0, ticks=9742/67664, in_queue=77407, util=80.79% 00:12:28.943 sdae: ios=642/740, merge=0/0, ticks=9548/66783, in_queue=76331, util=80.67% 00:12:28.943 sdaf: ios=642/750, merge=0/0, ticks=11398/64882, in_queue=76281, util=81.19% 00:12:28.943 sdah: ios=537/640, merge=0/0, ticks=12717/64430, in_queue=77147, util=81.41% 00:12:28.943 sdaj: ios=635/640, merge=0/0, ticks=9818/64228, in_queue=74046, util=79.03% 00:12:28.943 sdal: ios=642/791, merge=0/0, ticks=10405/66231, in_queue=76637, util=81.81% 00:12:28.943 sdao: ios=642/769, merge=0/0, ticks=12944/63521, 
in_queue=76465, util=82.44% 00:12:28.943 sdaq: ios=642/768, merge=0/0, ticks=10863/65308, in_queue=76171, util=82.38% 00:12:28.943 sdat: ios=676/659, merge=0/0, ticks=14493/61962, in_queue=76455, util=82.37% 00:12:28.943 sday: ios=642/778, merge=0/0, ticks=11412/65100, in_queue=76512, util=82.93% 00:12:28.943 sdba: ios=679/783, merge=0/0, ticks=11327/64360, in_queue=75687, util=81.63% 00:12:28.943 sdag: ios=480/497, merge=0/0, ticks=11908/63557, in_queue=75465, util=83.28% 00:12:28.943 sdai: ios=320/475, merge=0/0, ticks=7944/69029, in_queue=76973, util=83.63% 00:12:28.943 sdak: ios=320/392, merge=0/0, ticks=7728/69389, in_queue=77118, util=83.37% 00:12:28.943 sdam: ios=333/480, merge=0/0, ticks=9387/67589, in_queue=76976, util=83.55% 00:12:28.943 sdan: ios=480/496, merge=0/0, ticks=15507/59765, in_queue=75272, util=83.64% 00:12:28.943 sdap: ios=480/508, merge=0/0, ticks=15327/60148, in_queue=75475, util=83.97% 00:12:28.943 sdar: ios=480/511, merge=0/0, ticks=14231/60565, in_queue=74797, util=84.35% 00:12:28.943 sdau: ios=481/524, merge=0/0, ticks=11706/64089, in_queue=75796, util=84.21% 00:12:28.943 sdaw: ios=481/519, merge=0/0, ticks=13027/62858, in_queue=75886, util=84.60% 00:12:28.943 sdaz: ios=481/543, merge=0/0, ticks=10163/66978, in_queue=77141, util=84.83% 00:12:28.943 sdas: ios=480/481, merge=0/0, ticks=9206/66767, in_queue=75973, util=84.82% 00:12:28.943 sdav: ios=480/489, merge=0/0, ticks=11447/62981, in_queue=74429, util=84.95% 00:12:28.943 sdax: ios=481/533, merge=0/0, ticks=15007/61545, in_queue=76553, util=85.30% 00:12:28.943 sdbb: ios=480/506, merge=0/0, ticks=10736/65317, in_queue=76053, util=85.11% 00:12:28.943 sdbc: ios=482/536, merge=0/0, ticks=9231/67795, in_queue=77026, util=85.37% 00:12:28.943 sdbd: ios=481/540, merge=0/0, ticks=10982/65596, in_queue=76578, util=85.79% 00:12:28.943 sdbe: ios=320/470, merge=0/0, ticks=6059/70562, in_queue=76621, util=85.70% 00:12:28.943 sdbg: ios=481/532, merge=0/0, ticks=13082/62597, in_queue=75679, 
util=85.10% 00:12:28.943 sdbi: ios=320/395, merge=0/0, ticks=10583/66202, in_queue=76785, util=85.98% 00:12:28.943 sdbk: ios=480/516, merge=0/0, ticks=12312/62849, in_queue=75161, util=86.08% 00:12:28.943 sdbf: ios=642/789, merge=0/0, ticks=10703/65257, in_queue=75961, util=86.49% 00:12:28.943 sdbh: ios=642/789, merge=0/0, ticks=10245/66086, in_queue=76332, util=86.56% 00:12:28.943 sdbj: ios=642/752, merge=0/0, ticks=9571/66645, in_queue=76216, util=86.60% 00:12:28.943 sdbl: ios=642/792, merge=0/0, ticks=11653/64383, in_queue=76036, util=87.08% 00:12:28.943 sdbm: ios=640/648, merge=0/0, ticks=11938/64084, in_queue=76022, util=87.50% 00:12:28.943 sdbn: ios=607/640, merge=0/0, ticks=14719/61701, in_queue=76420, util=87.22% 00:12:28.943 sdbo: ios=642/763, merge=0/0, ticks=9875/66736, in_queue=76612, util=88.03% 00:12:28.943 sdbp: ios=641/738, merge=0/0, ticks=9318/67195, in_queue=76514, util=88.39% 00:12:28.943 sdbq: ios=642/739, merge=0/0, ticks=9439/66638, in_queue=76077, util=88.39% 00:12:28.943 sdbs: ios=501/640, merge=0/0, ticks=14320/62803, in_queue=77124, util=88.46% 00:12:28.943 sdbr: ios=641/697, merge=0/0, ticks=7897/68400, in_queue=76298, util=88.70% 00:12:28.943 sdbt: ios=642/762, merge=0/0, ticks=11628/64271, in_queue=75900, util=88.94% 00:12:28.943 sdbv: ios=676/782, merge=0/0, ticks=9317/68689, in_queue=78006, util=89.47% 00:12:28.943 sdby: ios=642/732, merge=0/0, ticks=12576/62773, in_queue=75349, util=89.55% 00:12:28.943 sdca: ios=642/701, merge=0/0, ticks=11736/63195, in_queue=74932, util=89.10% 00:12:28.943 sdcc: ios=642/772, merge=0/0, ticks=13109/63603, in_queue=76713, util=89.84% 00:12:28.943 sdcg: ios=642/763, merge=0/0, ticks=12831/63049, in_queue=75880, util=89.52% 00:12:28.943 sdcj: ios=641/665, merge=0/0, ticks=9886/66712, in_queue=76598, util=90.00% 00:12:28.943 sdcm: ios=490/640, merge=0/0, ticks=7988/67929, in_queue=75917, util=90.49% 00:12:28.943 sdcr: ios=677/792, merge=0/0, ticks=10597/66967, in_queue=77565, util=90.93% 00:12:28.943 
sdbu: ios=480/502, merge=0/0, ticks=12500/62955, in_queue=75455, util=91.05% 00:12:28.943 sdbw: ios=320/466, merge=0/0, ticks=6992/69501, in_queue=76493, util=91.50% 00:12:28.943 sdbz: ios=481/530, merge=0/0, ticks=10548/66049, in_queue=76597, util=91.73% 00:12:28.943 sdce: ios=481/522, merge=0/0, ticks=14336/60788, in_queue=75124, util=91.83% 00:12:28.943 sdcf: ios=480/485, merge=0/0, ticks=8880/65668, in_queue=74549, util=92.23% 00:12:28.943 sdch: ios=341/480, merge=0/0, ticks=6534/68909, in_queue=75444, util=91.84% 00:12:28.944 sdck: ios=320/386, merge=0/0, ticks=13962/63389, in_queue=77351, util=92.38% 00:12:28.944 sdcn: ios=480/516, merge=0/0, ticks=12727/62119, in_queue=74847, util=92.64% 00:12:28.944 sdcp: ios=481/531, merge=0/0, ticks=13698/62872, in_queue=76571, util=92.92% 00:12:28.944 sdcs: ios=480/520, merge=0/0, ticks=13575/61856, in_queue=75431, util=93.06% 00:12:28.944 sdbx: ios=480/484, merge=0/0, ticks=11183/63009, in_queue=74193, util=93.18% 00:12:28.944 sdcb: ios=481/537, merge=0/0, ticks=12095/63965, in_queue=76061, util=93.27% 00:12:28.944 sdcd: ios=480/490, merge=0/0, ticks=11951/63977, in_queue=75929, util=94.21% 00:12:28.944 sdci: ios=481/516, merge=0/0, ticks=13210/61979, in_queue=75189, util=94.22% 00:12:28.944 sdcl: ios=481/509, merge=0/0, ticks=11692/63846, in_queue=75539, util=94.23% 00:12:28.944 sdco: ios=320/435, merge=0/0, ticks=6820/70826, in_queue=77646, util=94.96% 00:12:28.944 sdcq: ios=481/527, merge=0/0, ticks=12279/63373, in_queue=75653, util=95.12% 00:12:28.944 sdct: ios=355/480, merge=0/0, ticks=6278/70755, in_queue=77033, util=94.98% 00:12:28.944 sdcu: ios=480/535, merge=0/0, ticks=16135/59543, in_queue=75678, util=95.69% 00:12:28.944 sdcv: ios=320/471, merge=0/0, ticks=5794/71403, in_queue=77197, util=95.99% 00:12:28.944 sda: ios=320/436, merge=0/0, ticks=9146/67825, in_queue=76971, util=96.15% 00:12:28.944 sdc: ios=481/525, merge=0/0, ticks=12020/63554, in_queue=75575, util=96.43% 00:12:28.944 sde: ios=471/480, merge=0/0, 
ticks=14487/61919, in_queue=76406, util=96.90% 00:12:28.944 sdg: ios=482/529, merge=0/0, ticks=11275/66243, in_queue=77518, util=97.39% 00:12:28.944 sdk: ios=320/403, merge=0/0, ticks=10616/66723, in_queue=77339, util=97.14% 00:12:28.944 sdn: ios=481/541, merge=0/0, ticks=8412/68387, in_queue=76800, util=97.53% 00:12:28.944 sdp: ios=480/517, merge=0/0, ticks=13225/62051, in_queue=75276, util=98.35% 00:12:28.944 sds: ios=480/515, merge=0/0, ticks=13233/62562, in_queue=75795, util=98.19% 00:12:28.944 sdw: ios=481/538, merge=0/0, ticks=12255/64698, in_queue=76953, util=98.56% 00:12:28.944 sdaa: ios=442/480, merge=0/0, ticks=13327/60537, in_queue=73865, util=98.72% 00:12:28.944 [2024-07-23 02:07:37.445989] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.448504] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.451529] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.453732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.456045] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.458274] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.460390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.463844] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@78 -- # timing_exit fio 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:28.944 [2024-07-23 02:07:37.468753] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.472476] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.477013] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.479274] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.483011] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.487611] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.490806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.492881] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.495003] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.497334] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.500173] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@80 -- # rm -f ./local-job0-0-verify.state 00:12:28.944 [2024-07-23 02:07:37.502539] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@83 -- # rm -f 00:12:28.944 [2024-07-23 02:07:37.505048] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@84 -- # iscsicleanup 00:12:28.944 02:07:37 
iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:12:28.944 Cleaning up iSCSI connection 00:12:28.944 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:12:28.944 [2024-07-23 02:07:37.507244] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.509345] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.511438] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.513743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.515851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.518238] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.520372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.523608] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.530047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.534394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.537938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.540399] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.544790] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.547176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 
02:07:37.550047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.552079] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.554154] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.556193] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.558178] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.560165] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.562682] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.564627] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.603901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.605945] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.608291] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.610648] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.612471] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.614480] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.616400] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.618350] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.620478] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.622530] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.624681] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.633067] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.635412] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.637390] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.639402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.641385] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.645581] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.651509] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.655851] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.661077] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.665686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.669473] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.671671] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.674364] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.676649] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.680630] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:28.944 [2024-07-23 02:07:37.685623] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:29.203 [2024-07-23 02:07:37.723527] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:29.203 [2024-07-23 02:07:37.726109] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:29.203 Logging out of session [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:12:29.203 Logging out of session [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:12:29.203 Logout of [sid: 10, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 11, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 12, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 13, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 
00:12:29.203 Logout of [sid: 14, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 15, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 16, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 17, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 18, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:12:29.203 Logout of [sid: 9, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:12:29.203 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:12:29.203 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@983 -- # rm -rf 00:12:29.203 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@85 -- # killprocess 68002 00:12:29.203 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@948 -- # '[' -z 68002 ']' 00:12:29.203 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@952 -- # kill -0 68002 00:12:29.203 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # uname 00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 68002 00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68002' 00:12:29.204 killing process with pid 68002 00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@967 -- # kill 68002 
00:12:29.204 02:07:37 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@972 -- # wait 68002 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- lvol/iscsi_lvol.sh@86 -- # iscsitestfini 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:12:35.768 00:12:35.768 real 1m2.047s 00:12:35.768 user 4m13.189s 00:12:35.768 sys 0m23.912s 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_iscsi_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:35.768 ************************************ 00:12:35.768 END TEST iscsi_tgt_iscsi_lvol 00:12:35.768 ************************************ 00:12:35.768 02:07:43 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:12:35.768 02:07:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@37 -- # run_test iscsi_tgt_fio /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:12:35.768 02:07:43 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:12:35.768 02:07:43 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:35.768 02:07:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:12:35.768 ************************************ 00:12:35.768 START TEST iscsi_tgt_fio 00:12:35.768 ************************************ 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/fio.sh 00:12:35.768 * Looking for test storage... 
00:12:35.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@11 -- # iscsitestinit 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@48 -- # '[' -z 10.0.0.1 ']' 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@53 -- # '[' -z 10.0.0.2 ']' 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@58 -- # MALLOC_BDEV_SIZE=64 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@59 -- # MALLOC_BLOCK_SIZE=4096 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@60 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@61 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@63 -- # timing_enter start_iscsi_tgt 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@66 -- # pid=72449 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@67 -- # echo 'Process pid: 72449' 00:12:35.768 Process pid: 72449 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@69 -- # trap 'killprocess $pid; exit 1' SIGINT SIGTERM EXIT 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@71 -- # waitforlisten 72449 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@829 -- # '[' -z 72449 ']' 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:35.768 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@65 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:35.768 02:07:43 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:12:35.768 [2024-07-23 02:07:43.608197] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:12:35.768 [2024-07-23 02:07:43.608448] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72449 ] 00:12:35.768 [2024-07-23 02:07:43.783107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.769 [2024-07-23 02:07:43.978140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.769 02:07:44 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:35.769 02:07:44 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@862 -- # return 0 00:12:35.769 02:07:44 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:12:36.704 iscsi_tgt is listening. Running tests... 00:12:36.704 02:07:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@75 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:12:36.704 02:07:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@77 -- # timing_exit start_iscsi_tgt 00:12:36.704 02:07:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:36.704 02:07:45 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:12:36.704 02:07:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:12:36.963 02:07:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:12:36.963 02:07:45 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:12:37.531 02:07:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@82 -- # malloc_bdevs='Malloc0 ' 00:12:37.531 02:07:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 4096 00:12:37.791 02:07:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@83 -- # malloc_bdevs+=Malloc1 00:12:37.791 02:07:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:37.791 02:07:46 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 1024 512 00:12:39.168 02:07:47 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@85 -- # bdev=Malloc2 00:12:39.168 02:07:47 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias 'raid0:0 Malloc2:1' 1:2 64 -d 00:12:39.168 02:07:47 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@91 -- # sleep 1 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@93 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:12:40.546 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@94 -- # iscsiadm -m node --login -p 
10.0.0.1:3260 00:12:40.546 [2024-07-23 02:07:48.941767] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:40.546 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:12:40.546 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@95 -- # waitforiscsidevices 2 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@116 -- # local num=2 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:12:40.546 [2024-07-23 02:07:48.957581] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@119 -- # n=2 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@120 -- # '[' 2 -ne 2 ']' 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@123 -- # return 0 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@97 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; delete_tmp_files; exit 1' SIGINT SIGTERM EXIT 00:12:40.546 02:07:48 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:12:40.546 [global] 00:12:40.546 thread=1 00:12:40.546 invalidate=1 00:12:40.546 rw=randrw 00:12:40.546 time_based=1 00:12:40.546 runtime=1 00:12:40.546 ioengine=libaio 00:12:40.546 direct=1 00:12:40.546 bs=4096 00:12:40.546 iodepth=1 00:12:40.546 norandommap=0 00:12:40.546 numjobs=1 00:12:40.546 00:12:40.546 verify_dump=1 00:12:40.546 verify_backlog=512 
00:12:40.546 verify_state_save=0 00:12:40.546 do_verify=1 00:12:40.546 verify=crc32c-intel 00:12:40.546 [job0] 00:12:40.546 filename=/dev/sda 00:12:40.546 [job1] 00:12:40.546 filename=/dev/sdb 00:12:40.546 queue_depth set to 113 (sda) 00:12:40.546 queue_depth set to 113 (sdb) 00:12:40.546 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.546 job1: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:40.546 fio-3.35 00:12:40.546 Starting 2 threads 00:12:40.546 [2024-07-23 02:07:49.175558] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:40.546 [2024-07-23 02:07:49.179582] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:41.922 [2024-07-23 02:07:50.289201] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:41.922 [2024-07-23 02:07:50.292909] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:41.922 00:12:41.922 job0: (groupid=0, jobs=1): err= 0: pid=72597: Tue Jul 23 02:07:50 2024 00:12:41.922 read: IOPS=3434, BW=13.4MiB/s (14.1MB/s)(13.4MiB/1000msec) 00:12:41.922 slat (usec): min=3, max=226, avg= 8.64, stdev= 8.08 00:12:41.922 clat (usec): min=50, max=927, avg=169.64, stdev=56.01 00:12:41.922 lat (usec): min=107, max=934, avg=178.28, stdev=59.16 00:12:41.922 clat percentiles (usec): 00:12:41.922 | 1.00th=[ 108], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:12:41.922 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 163], 00:12:41.922 | 70.00th=[ 172], 80.00th=[ 190], 90.00th=[ 235], 95.00th=[ 281], 00:12:41.922 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 494], 99.95th=[ 570], 00:12:41.922 | 99.99th=[ 930] 00:12:41.922 bw ( KiB/s): min= 7728, max= 7728, per=28.51%, avg=7728.00, stdev= 0.00, samples=1 00:12:41.922 iops : min= 1932, max= 1932, avg=1932.00, stdev= 0.00, samples=1 00:12:41.922 write: 
IOPS=1983, BW=7932KiB/s (8122kB/s)(7932KiB/1000msec); 0 zone resets 00:12:41.922 slat (usec): min=4, max=310, avg=11.67, stdev= 9.60 00:12:41.922 clat (usec): min=97, max=1300, avg=179.89, stdev=65.95 00:12:41.922 lat (usec): min=106, max=1316, avg=191.55, stdev=70.62 00:12:41.922 clat percentiles (usec): 00:12:41.922 | 1.00th=[ 102], 5.00th=[ 120], 10.00th=[ 125], 20.00th=[ 137], 00:12:41.922 | 30.00th=[ 143], 40.00th=[ 151], 50.00th=[ 159], 60.00th=[ 172], 00:12:41.922 | 70.00th=[ 188], 80.00th=[ 212], 90.00th=[ 273], 95.00th=[ 302], 00:12:41.922 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 482], 99.95th=[ 1303], 00:12:41.922 | 99.99th=[ 1303] 00:12:41.922 bw ( KiB/s): min= 8175, max= 8175, per=52.52%, avg=8175.00, stdev= 0.00, samples=1 00:12:41.922 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:41.922 lat (usec) : 100=0.17%, 250=89.16%, 500=10.60%, 750=0.04%, 1000=0.02% 00:12:41.922 lat (msec) : 2=0.02% 00:12:41.922 cpu : usr=2.10%, sys=6.70%, ctx=5417, majf=0, minf=7 00:12:41.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.922 issued rwts: total=3434,1983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.922 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.922 job1: (groupid=0, jobs=1): err= 0: pid=72600: Tue Jul 23 02:07:50 2024 00:12:41.922 read: IOPS=3345, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:12:41.922 slat (nsec): min=3068, max=85818, avg=8012.33, stdev=6301.89 00:12:41.922 clat (usec): min=91, max=923, avg=174.55, stdev=57.48 00:12:41.922 lat (usec): min=101, max=927, avg=182.56, stdev=60.91 00:12:41.922 clat percentiles (usec): 00:12:41.922 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 139], 00:12:41.922 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 163], 00:12:41.922 | 70.00th=[ 
176], 80.00th=[ 200], 90.00th=[ 251], 95.00th=[ 285], 00:12:41.922 | 99.00th=[ 412], 99.50th=[ 449], 99.90th=[ 474], 99.95th=[ 498], 00:12:41.922 | 99.99th=[ 922] 00:12:41.922 bw ( KiB/s): min= 7497, max= 7497, per=27.66%, avg=7497.00, stdev= 0.00, samples=1 00:12:41.923 iops : min= 1874, max= 1874, avg=1874.00, stdev= 0.00, samples=1 00:12:41.923 write: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec); 0 zone resets 00:12:41.923 slat (nsec): min=4028, max=92479, avg=10944.35, stdev=8015.64 00:12:41.923 clat (usec): min=90, max=469, avg=188.06, stdev=59.90 00:12:41.923 lat (usec): min=100, max=487, avg=199.00, stdev=64.26 00:12:41.923 clat percentiles (usec): 00:12:41.923 | 1.00th=[ 110], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 147], 00:12:41.923 | 30.00th=[ 151], 40.00th=[ 159], 50.00th=[ 167], 60.00th=[ 180], 00:12:41.923 | 70.00th=[ 198], 80.00th=[ 225], 90.00th=[ 277], 95.00th=[ 306], 00:12:41.923 | 99.00th=[ 404], 99.50th=[ 420], 99.90th=[ 461], 99.95th=[ 469], 00:12:41.923 | 99.99th=[ 469] 00:12:41.923 bw ( KiB/s): min= 8175, max= 8175, per=52.52%, avg=8175.00, stdev= 0.00, samples=1 00:12:41.923 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:12:41.923 lat (usec) : 100=0.38%, 250=87.68%, 500=11.92%, 1000=0.02% 00:12:41.923 cpu : usr=1.90%, sys=5.90%, ctx=5261, majf=0, minf=7 00:12:41.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.923 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.923 issued rwts: total=3349,1912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.923 00:12:41.923 Run status group 0 (all jobs): 00:12:41.923 READ: bw=26.5MiB/s (27.8MB/s), 13.1MiB/s-13.4MiB/s (13.7MB/s-14.1MB/s), io=26.5MiB (27.8MB), run=1000-1001msec 00:12:41.923 WRITE: bw=15.2MiB/s (15.9MB/s), 7640KiB/s-7932KiB/s 
(7824kB/s-8122kB/s), io=15.2MiB (16.0MB), run=1000-1001msec 00:12:41.923 00:12:41.923 Disk stats (read/write): 00:12:41.923 sda: ios=3149/1631, merge=0/0, ticks=539/300, in_queue=840, util=90.89% 00:12:41.923 sdb: ios=3061/1576, merge=0/0, ticks=540/301, in_queue=842, util=91.07% 00:12:41.923 02:07:50 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:12:41.923 [global] 00:12:41.923 thread=1 00:12:41.923 invalidate=1 00:12:41.923 rw=randrw 00:12:41.923 time_based=1 00:12:41.923 runtime=1 00:12:41.923 ioengine=libaio 00:12:41.923 direct=1 00:12:41.923 bs=131072 00:12:41.923 iodepth=32 00:12:41.923 norandommap=0 00:12:41.923 numjobs=1 00:12:41.923 00:12:41.923 verify_dump=1 00:12:41.923 verify_backlog=512 00:12:41.923 verify_state_save=0 00:12:41.923 do_verify=1 00:12:41.923 verify=crc32c-intel 00:12:41.923 [job0] 00:12:41.923 filename=/dev/sda 00:12:41.923 [job1] 00:12:41.923 filename=/dev/sdb 00:12:41.923 queue_depth set to 113 (sda) 00:12:41.923 queue_depth set to 113 (sdb) 00:12:41.923 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:12:41.923 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:12:41.923 fio-3.35 00:12:41.923 Starting 2 threads 00:12:41.923 [2024-07-23 02:07:50.503043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:41.923 [2024-07-23 02:07:50.506536] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:42.858 [2024-07-23 02:07:51.587485] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:43.117 [2024-07-23 02:07:51.637369] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:43.117 00:12:43.117 job0: (groupid=0, jobs=1): err= 0: pid=72669: Tue Jul 23 02:07:51 2024 00:12:43.117 read: IOPS=1502, 
BW=188MiB/s (197MB/s)(190MiB/1009msec) 00:12:43.117 slat (usec): min=4, max=1131, avg=23.51, stdev=55.66 00:12:43.117 clat (usec): min=1316, max=26580, avg=5072.54, stdev=3093.44 00:12:43.117 lat (usec): min=1344, max=26593, avg=5096.05, stdev=3096.95 00:12:43.117 clat percentiles (usec): 00:12:43.117 | 1.00th=[ 1713], 5.00th=[ 1909], 10.00th=[ 2008], 20.00th=[ 2180], 00:12:43.117 | 30.00th=[ 2343], 40.00th=[ 2606], 50.00th=[ 4424], 60.00th=[ 6521], 00:12:43.117 | 70.00th=[ 7570], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8979], 00:12:43.117 | 99.00th=[12780], 99.50th=[15533], 99.90th=[26608], 99.95th=[26608], 00:12:43.117 | 99.99th=[26608] 00:12:43.117 bw ( KiB/s): min=74752, max=119296, per=32.97%, avg=97024.00, stdev=31497.36, samples=2 00:12:43.117 iops : min= 584, max= 932, avg=758.00, stdev=246.07, samples=2 00:12:43.117 write: IOPS=825, BW=103MiB/s (108MB/s)(97.2MiB/943msec); 0 zone resets 00:12:43.117 slat (usec): min=26, max=197, avg=71.69, stdev=26.40 00:12:43.117 clat (usec): min=6728, max=69370, avg=31326.98, stdev=5050.66 00:12:43.117 lat (usec): min=6808, max=69413, avg=31398.67, stdev=5047.67 00:12:43.117 clat percentiles (usec): 00:12:43.117 | 1.00th=[20579], 5.00th=[27395], 10.00th=[28443], 20.00th=[29492], 00:12:43.117 | 30.00th=[30016], 40.00th=[30540], 50.00th=[30802], 60.00th=[31327], 00:12:43.117 | 70.00th=[31851], 80.00th=[32375], 90.00th=[32900], 95.00th=[34866], 00:12:43.117 | 99.00th=[56886], 99.50th=[62129], 99.90th=[69731], 99.95th=[69731], 00:12:43.117 | 99.99th=[69731] 00:12:43.117 bw ( KiB/s): min=78848, max=120320, per=47.27%, avg=99584.00, stdev=29325.13, samples=2 00:12:43.117 iops : min= 616, max= 940, avg=778.00, stdev=229.10, samples=2 00:12:43.117 lat (msec) : 2=6.23%, 4=25.59%, 10=32.65%, 20=1.74%, 50=33.00% 00:12:43.117 lat (msec) : 100=0.78% 00:12:43.117 cpu : usr=6.75%, sys=4.56%, ctx=1994, majf=0, minf=7 00:12:43.117 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.4%, 32=97.3%, >=64=0.0% 00:12:43.117 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.117 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:12:43.117 issued rwts: total=1516,778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.117 latency : target=0, window=0, percentile=100.00%, depth=32 00:12:43.117 job1: (groupid=0, jobs=1): err= 0: pid=72671: Tue Jul 23 02:07:51 2024 00:12:43.117 read: IOPS=808, BW=101MiB/s (106MB/s)(103MiB/1017msec) 00:12:43.117 slat (nsec): min=10502, max=79843, avg=29498.14, stdev=11511.18 00:12:43.117 clat (usec): min=1491, max=24651, avg=3124.78, stdev=2708.15 00:12:43.117 lat (usec): min=1515, max=24682, avg=3154.28, stdev=2707.24 00:12:43.117 clat percentiles (usec): 00:12:43.117 | 1.00th=[ 1631], 5.00th=[ 1811], 10.00th=[ 1876], 20.00th=[ 1991], 00:12:43.117 | 30.00th=[ 2057], 40.00th=[ 2147], 50.00th=[ 2212], 60.00th=[ 2311], 00:12:43.117 | 70.00th=[ 2409], 80.00th=[ 2638], 90.00th=[ 6849], 95.00th=[ 9503], 00:12:43.117 | 99.00th=[12911], 99.50th=[18220], 99.90th=[24773], 99.95th=[24773], 00:12:43.117 | 99.99th=[24773] 00:12:43.117 bw ( KiB/s): min=94464, max=114432, per=35.49%, avg=104448.00, stdev=14119.51, samples=2 00:12:43.117 iops : min= 738, max= 894, avg=816.00, stdev=110.31, samples=2 00:12:43.117 write: IOPS=881, BW=110MiB/s (115MB/s)(112MiB/1017msec); 0 zone resets 00:12:43.117 slat (usec): min=46, max=184, avg=99.13, stdev=17.49 00:12:43.117 clat (usec): min=1777, max=75017, avg=33249.35, stdev=9612.66 00:12:43.117 lat (usec): min=1860, max=75125, avg=33348.48, stdev=9613.34 00:12:43.117 clat percentiles (usec): 00:12:43.117 | 1.00th=[10290], 5.00th=[27132], 10.00th=[28443], 20.00th=[29492], 00:12:43.117 | 30.00th=[30016], 40.00th=[30540], 50.00th=[31065], 60.00th=[31589], 00:12:43.117 | 70.00th=[32113], 80.00th=[32637], 90.00th=[41157], 95.00th=[56886], 00:12:43.117 | 99.00th=[72877], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:12:43.117 | 99.99th=[74974] 00:12:43.117 bw ( KiB/s): min=100608, max=122368, 
per=52.92%, avg=111488.00, stdev=15386.64, samples=2 00:12:43.117 iops : min= 786, max= 956, avg=871.00, stdev=120.21, samples=2 00:12:43.117 lat (msec) : 2=10.07%, 4=31.49%, 10=4.83%, 20=2.10%, 50=47.38% 00:12:43.117 lat (msec) : 100=4.13% 00:12:43.117 cpu : usr=8.07%, sys=5.91%, ctx=1367, majf=0, minf=7 00:12:43.117 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=98.2%, >=64=0.0% 00:12:43.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.117 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:12:43.117 issued rwts: total=822,896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.117 latency : target=0, window=0, percentile=100.00%, depth=32 00:12:43.117 00:12:43.117 Run status group 0 (all jobs): 00:12:43.117 READ: bw=287MiB/s (301MB/s), 101MiB/s-188MiB/s (106MB/s-197MB/s), io=292MiB (306MB), run=1009-1017msec 00:12:43.117 WRITE: bw=206MiB/s (216MB/s), 103MiB/s-110MiB/s (108MB/s-115MB/s), io=209MiB (219MB), run=943-1017msec 00:12:43.117 00:12:43.117 Disk stats (read/write): 00:12:43.117 sda: ios=1275/719, merge=0/0, ticks=5287/22103, in_queue=27390, util=90.09% 00:12:43.117 sdb: ios=799/777, merge=0/0, ticks=2186/25416, in_queue=27601, util=90.57% 00:12:43.117 02:07:51 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@101 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 524288 -d 128 -t randrw -r 1 -v 00:12:43.117 [global] 00:12:43.117 thread=1 00:12:43.117 invalidate=1 00:12:43.117 rw=randrw 00:12:43.117 time_based=1 00:12:43.117 runtime=1 00:12:43.117 ioengine=libaio 00:12:43.117 direct=1 00:12:43.117 bs=524288 00:12:43.117 iodepth=128 00:12:43.117 norandommap=0 00:12:43.117 numjobs=1 00:12:43.117 00:12:43.117 verify_dump=1 00:12:43.117 verify_backlog=512 00:12:43.117 verify_state_save=0 00:12:43.117 do_verify=1 00:12:43.117 verify=crc32c-intel 00:12:43.117 [job0] 00:12:43.117 filename=/dev/sda 00:12:43.117 [job1] 00:12:43.117 filename=/dev/sdb 00:12:43.117 queue_depth set to 113 (sda) 00:12:43.117 
queue_depth set to 113 (sdb) 00:12:43.117 job0: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:12:43.117 job1: (g=0): rw=randrw, bs=(R) 512KiB-512KiB, (W) 512KiB-512KiB, (T) 512KiB-512KiB, ioengine=libaio, iodepth=128 00:12:43.117 fio-3.35 00:12:43.117 Starting 2 threads 00:12:43.117 [2024-07-23 02:07:51.851925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:43.117 [2024-07-23 02:07:51.855387] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.494 [2024-07-23 02:07:53.136191] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.494 [2024-07-23 02:07:53.141686] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:44.494 00:12:44.494 job0: (groupid=0, jobs=1): err= 0: pid=72735: Tue Jul 23 02:07:53 2024 00:12:44.494 read: IOPS=209, BW=105MiB/s (110MB/s)(116MiB/1109msec) 00:12:44.494 slat (usec): min=18, max=18356, avg=1835.78, stdev=3451.39 00:12:44.494 clat (msec): min=108, max=396, avg=255.49, stdev=44.79 00:12:44.494 lat (msec): min=108, max=396, avg=257.32, stdev=44.87 00:12:44.494 clat percentiles (msec): 00:12:44.494 | 1.00th=[ 114], 5.00th=[ 157], 10.00th=[ 209], 20.00th=[ 232], 00:12:44.494 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 264], 60.00th=[ 268], 00:12:44.494 | 70.00th=[ 275], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 326], 00:12:44.494 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 397], 99.95th=[ 397], 00:12:44.494 | 99.99th=[ 397] 00:12:44.494 bw ( KiB/s): min=63488, max=119808, per=37.92%, avg=91648.00, stdev=39824.25, samples=2 00:12:44.494 iops : min= 124, max= 234, avg=179.00, stdev=77.78, samples=2 00:12:44.494 write: IOPS=231, BW=116MiB/s (121MB/s)(129MiB/1109msec); 0 zone resets 00:12:44.494 slat (usec): min=148, max=44195, avg=2251.71, stdev=4188.07 00:12:44.494 clat (msec): min=101, max=405, avg=276.25, stdev=54.18 00:12:44.494 lat 
(msec): min=108, max=405, avg=278.50, stdev=54.14 00:12:44.494 clat percentiles (msec): 00:12:44.494 | 1.00th=[ 113], 5.00th=[ 148], 10.00th=[ 192], 20.00th=[ 253], 00:12:44.494 | 30.00th=[ 268], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 296], 00:12:44.494 | 70.00th=[ 305], 80.00th=[ 313], 90.00th=[ 330], 95.00th=[ 355], 00:12:44.494 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 405], 99.95th=[ 405], 00:12:44.494 | 99.99th=[ 405] 00:12:44.494 bw ( KiB/s): min=64512, max=121856, per=35.20%, avg=93184.00, stdev=40548.33, samples=2 00:12:44.494 iops : min= 126, max= 238, avg=182.00, stdev=79.20, samples=2 00:12:44.494 lat (msec) : 250=28.43%, 500=71.57% 00:12:44.494 cpu : usr=4.96%, sys=1.17%, ctx=469, majf=0, minf=9 00:12:44.494 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.6%, 16=3.3%, 32=6.5%, >=64=87.1% 00:12:44.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.494 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:12:44.495 issued rwts: total=232,257,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.495 job1: (groupid=0, jobs=1): err= 0: pid=72736: Tue Jul 23 02:07:53 2024 00:12:44.495 read: IOPS=263, BW=132MiB/s (138MB/s)(146MiB/1110msec) 00:12:44.495 slat (usec): min=10, max=17815, avg=1616.95, stdev=3310.00 00:12:44.495 clat (msec): min=78, max=344, avg=210.62, stdev=49.89 00:12:44.495 lat (msec): min=78, max=344, avg=212.24, stdev=50.31 00:12:44.495 clat percentiles (msec): 00:12:44.495 | 1.00th=[ 82], 5.00th=[ 100], 10.00th=[ 157], 20.00th=[ 178], 00:12:44.495 | 30.00th=[ 201], 40.00th=[ 213], 50.00th=[ 220], 60.00th=[ 226], 00:12:44.495 | 70.00th=[ 232], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 284], 00:12:44.495 | 99.00th=[ 342], 99.50th=[ 347], 99.90th=[ 347], 99.95th=[ 347], 00:12:44.495 | 99.99th=[ 347] 00:12:44.495 bw ( KiB/s): min=112640, max=123656, per=48.88%, avg=118148.00, stdev=7789.49, samples=2 00:12:44.495 iops : min= 220, 
max= 241, avg=230.50, stdev=14.85, samples=2 00:12:44.495 write: IOPS=285, BW=143MiB/s (150MB/s)(159MiB/1110msec); 0 zone resets 00:12:44.495 slat (usec): min=92, max=16441, avg=1674.36, stdev=2996.29 00:12:44.495 clat (msec): min=104, max=394, avg=241.30, stdev=53.46 00:12:44.495 lat (msec): min=120, max=394, avg=242.97, stdev=53.83 00:12:44.495 clat percentiles (msec): 00:12:44.495 | 1.00th=[ 122], 5.00th=[ 131], 10.00th=[ 150], 20.00th=[ 205], 00:12:44.495 | 30.00th=[ 234], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 259], 00:12:44.495 | 70.00th=[ 268], 80.00th=[ 275], 90.00th=[ 279], 95.00th=[ 330], 00:12:44.495 | 99.00th=[ 380], 99.50th=[ 388], 99.90th=[ 393], 99.95th=[ 393], 00:12:44.495 | 99.99th=[ 393] 00:12:44.495 bw ( KiB/s): min=120832, max=134898, per=48.29%, avg=127865.00, stdev=9946.16, samples=2 00:12:44.495 iops : min= 236, max= 263, avg=249.50, stdev=19.09, samples=2 00:12:44.495 lat (msec) : 100=2.96%, 250=64.37%, 500=32.68% 00:12:44.495 cpu : usr=5.68%, sys=1.44%, ctx=535, majf=0, minf=5 00:12:44.495 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.3%, >=64=89.7% 00:12:44.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.495 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:12:44.495 issued rwts: total=292,317,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.495 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.495 00:12:44.495 Run status group 0 (all jobs): 00:12:44.495 READ: bw=236MiB/s (248MB/s), 105MiB/s-132MiB/s (110MB/s-138MB/s), io=262MiB (275MB), run=1109-1110msec 00:12:44.495 WRITE: bw=259MiB/s (271MB/s), 116MiB/s-143MiB/s (121MB/s-150MB/s), io=287MiB (301MB), run=1109-1110msec 00:12:44.495 00:12:44.495 Disk stats (read/write): 00:12:44.495 sda: ios=277/240, merge=0/0, ticks=23823/32303, in_queue=56127, util=79.98% 00:12:44.495 sdb: ios=340/306, merge=0/0, ticks=22992/32909, in_queue=55902, util=84.09% 00:12:44.495 02:07:53 iscsi_tgt.iscsi_tgt_fio -- 
fio/fio.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 1024 -t read -r 1 -n 4 00:12:44.495 [global] 00:12:44.495 thread=1 00:12:44.495 invalidate=1 00:12:44.495 rw=read 00:12:44.495 time_based=1 00:12:44.495 runtime=1 00:12:44.495 ioengine=libaio 00:12:44.495 direct=1 00:12:44.495 bs=1048576 00:12:44.495 iodepth=1024 00:12:44.495 norandommap=1 00:12:44.495 numjobs=4 00:12:44.495 00:12:44.495 [job0] 00:12:44.495 filename=/dev/sda 00:12:44.495 [job1] 00:12:44.495 filename=/dev/sdb 00:12:44.495 queue_depth set to 113 (sda) 00:12:44.495 queue_depth set to 113 (sdb) 00:12:44.753 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:12:44.753 ... 00:12:44.753 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=1024 00:12:44.753 ... 00:12:44.753 fio-3.35 00:12:44.753 Starting 8 threads 00:12:59.630 00:12:59.630 job0: (groupid=0, jobs=1): err= 0: pid=72801: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=0, BW=148KiB/s (152kB/s)(2048KiB/13813msec) 00:12:59.630 slat (msec): min=834, max=3079, avg=1956.96, stdev=1587.86 00:12:59.630 clat (msec): min=9898, max=12978, avg=11438.70, stdev=2177.72 00:12:59.630 lat (msec): min=12978, max=13812, avg=13395.66, stdev=589.85 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[ 9866], 5.00th=[ 9866], 10.00th=[ 9866], 20.00th=[ 9866], 00:12:59.630 | 30.00th=[ 9866], 40.00th=[ 9866], 50.00th=[ 9866], 60.00th=[12953], 00:12:59.630 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:12:59.630 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:12:59.630 | 99.99th=[12953] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.00%, sys=0.01%, ctx=9, majf=0, minf=513 00:12:59.630 IO depths : 1=50.0%, 2=50.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 issued rwts: total=2,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 job0: (groupid=0, jobs=1): err= 0: pid=72802: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=2, BW=2954KiB/s (3025kB/s)(40.0MiB/13864msec) 00:12:59.630 slat (usec): min=572, max=3908.9k, avg=98823.78, stdev=617874.27 00:12:59.630 clat (msec): min=9910, max=13862, avg=13743.65, stdev=621.81 00:12:59.630 lat (msec): min=13819, max=13863, avg=13842.47, stdev=14.01 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[ 9866], 5.00th=[13758], 10.00th=[13758], 20.00th=[13892], 00:12:59.630 | 30.00th=[13892], 40.00th=[13892], 50.00th=[13892], 60.00th=[13892], 00:12:59.630 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892], 00:12:59.630 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:12:59.630 | 99.99th=[13892] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.01%, sys=0.19%, ctx=36, majf=0, minf=10241 00:12:59.630 IO depths : 1=2.5%, 2=5.0%, 4=10.0%, 8=20.0%, 16=40.0%, 32=22.5%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:12:59.630 issued rwts: total=40,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 job0: (groupid=0, jobs=1): err= 0: pid=72803: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=2, BW=2292KiB/s (2347kB/s)(31.0MiB/13852msec) 00:12:59.630 slat (usec): min=480, max=3080.4k, avg=127323.21, stdev=567971.35 00:12:59.630 clat (msec): min=9904, max=13847, avg=13678.74, stdev=717.00 00:12:59.630 lat (msec): min=12984, max=13851, avg=13806.06, stdev=152.89 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[ 9866], 
5.00th=[12953], 10.00th=[13758], 20.00th=[13758], 00:12:59.630 | 30.00th=[13758], 40.00th=[13892], 50.00th=[13892], 60.00th=[13892], 00:12:59.630 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892], 00:12:59.630 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:12:59.630 | 99.99th=[13892] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.00%, sys=0.12%, ctx=67, majf=0, minf=7937 00:12:59.630 IO depths : 1=3.2%, 2=6.5%, 4=12.9%, 8=25.8%, 16=51.6%, 32=0.0%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:12:59.630 issued rwts: total=31,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 job0: (groupid=0, jobs=1): err= 0: pid=72804: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=0, BW=814KiB/s (834kB/s)(11.0MiB/13838msec) 00:12:59.630 slat (usec): min=687, max=4926.6k, avg=525221.54, stdev=1480818.09 00:12:59.630 clat (msec): min=8059, max=13834, avg=13148.12, stdev=1720.58 00:12:59.630 lat (msec): min=12986, max=13837, avg=13673.34, stdev=339.55 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[ 8087], 5.00th=[ 8087], 10.00th=[12953], 20.00th=[12953], 00:12:59.630 | 30.00th=[13758], 40.00th=[13758], 50.00th=[13758], 60.00th=[13758], 00:12:59.630 | 70.00th=[13758], 80.00th=[13758], 90.00th=[13892], 95.00th=[13892], 00:12:59.630 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:12:59.630 | 99.99th=[13892] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.00%, sys=0.05%, ctx=23, majf=0, minf=2817 00:12:59.630 IO depths : 1=9.1%, 2=18.2%, 4=36.4%, 8=36.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:59.630 issued rwts: total=11,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 job1: (groupid=0, jobs=1): err= 0: pid=72805: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=2, BW=2433KiB/s (2491kB/s)(33.0MiB/13890msec) 00:12:59.630 slat (usec): min=502, max=4927.3k, avg=175468.86, stdev=864950.18 00:12:59.630 clat (msec): min=8099, max=13884, avg=13665.08, stdev=1009.86 00:12:59.630 lat (msec): min=13026, max=13889, avg=13840.55, stdev=146.57 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[ 8087], 5.00th=[13087], 10.00th=[13892], 20.00th=[13892], 00:12:59.630 | 30.00th=[13892], 40.00th=[13892], 50.00th=[13892], 60.00th=[13892], 00:12:59.630 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892], 00:12:59.630 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:12:59.630 | 99.99th=[13892] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.01%, sys=0.14%, ctx=44, majf=0, minf=8449 00:12:59.630 IO depths : 1=3.0%, 2=6.1%, 4=12.1%, 8=24.2%, 16=48.5%, 32=6.1%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:12:59.630 issued rwts: total=33,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 job1: (groupid=0, jobs=1): err= 0: pid=72806: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=0, BW=666KiB/s (681kB/s)(9216KiB/13848msec) 00:12:59.630 slat (usec): min=757, max=3903.3k, avg=434513.59, stdev=1300800.35 00:12:59.630 clat (msec): min=9936, max=13845, avg=13408.48, stdev=1302.03 00:12:59.630 lat (msec): min=13839, max=13847, avg=13842.99, stdev= 2.37 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[10000], 5.00th=[10000], 10.00th=[10000], 20.00th=[13892], 00:12:59.630 | 30.00th=[13892], 40.00th=[13892], 
50.00th=[13892], 60.00th=[13892], 00:12:59.630 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892], 00:12:59.630 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:12:59.630 | 99.99th=[13892] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.00%, sys=0.05%, ctx=21, majf=0, minf=2305 00:12:59.630 IO depths : 1=11.1%, 2=22.2%, 4=44.4%, 8=22.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 issued rwts: total=9,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 job1: (groupid=0, jobs=1): err= 0: pid=72807: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=1, BW=1182KiB/s (1210kB/s)(16.0MiB/13867msec) 00:12:59.630 slat (usec): min=600, max=3903.4k, avg=245174.68, stdev=975537.72 00:12:59.630 clat (msec): min=9943, max=13865, avg=13610.29, stdev=977.75 00:12:59.630 lat (msec): min=13847, max=13866, avg=13855.46, stdev= 5.83 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[10000], 5.00th=[10000], 10.00th=[13892], 20.00th=[13892], 00:12:59.630 | 30.00th=[13892], 40.00th=[13892], 50.00th=[13892], 60.00th=[13892], 00:12:59.630 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[13892], 00:12:59.630 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:12:59.630 | 99.99th=[13892] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.01%, sys=0.06%, ctx=28, majf=0, minf=4097 00:12:59.630 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, 
window=0, percentile=100.00%, depth=1024 00:12:59.630 job1: (groupid=0, jobs=1): err= 0: pid=72808: Tue Jul 23 02:08:07 2024 00:12:59.630 read: IOPS=0, BW=296KiB/s (303kB/s)(4096KiB/13824msec) 00:12:59.630 slat (usec): min=513, max=7600.2k, avg=1900524.59, stdev=3799759.35 00:12:59.630 clat (msec): min=6221, max=13823, avg=11922.22, stdev=3800.40 00:12:59.630 lat (usec): min=13822k, max=13824k, avg=13822745.26, stdev=870.11 00:12:59.630 clat percentiles (msec): 00:12:59.630 | 1.00th=[ 6208], 5.00th=[ 6208], 10.00th=[ 6208], 20.00th=[ 6208], 00:12:59.630 | 30.00th=[13758], 40.00th=[13758], 50.00th=[13758], 60.00th=[13758], 00:12:59.630 | 70.00th=[13758], 80.00th=[13758], 90.00th=[13758], 95.00th=[13758], 00:12:59.630 | 99.00th=[13758], 99.50th=[13758], 99.90th=[13758], 99.95th=[13758], 00:12:59.630 | 99.99th=[13758] 00:12:59.630 lat (msec) : >=2000=100.00% 00:12:59.630 cpu : usr=0.00%, sys=0.01%, ctx=11, majf=0, minf=1025 00:12:59.630 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.630 issued rwts: total=4,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.630 latency : target=0, window=0, percentile=100.00%, depth=1024 00:12:59.630 00:12:59.630 Run status group 0 (all jobs): 00:12:59.630 READ: bw=10.5MiB/s (11.0MB/s), 148KiB/s-2954KiB/s (152kB/s-3025kB/s), io=146MiB (153MB), run=13813-13890msec 00:12:59.630 00:12:59.630 Disk stats (read/write): 00:12:59.630 sda: ios=63/0, merge=0/0, ticks=343083/0, in_queue=343083, util=97.61% 00:12:59.630 sdb: ios=47/0, merge=0/0, ticks=542903/0, in_queue=542903, util=97.52% 00:12:59.630 02:08:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@104 -- # '[' 1 -eq 1 ']' 00:12:59.630 02:08:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t write -r 300 -v 00:12:59.630 
[global] 00:12:59.630 thread=1 00:12:59.630 invalidate=1 00:12:59.630 rw=write 00:12:59.630 time_based=1 00:12:59.630 runtime=300 00:12:59.630 ioengine=libaio 00:12:59.630 direct=1 00:12:59.630 bs=4096 00:12:59.630 iodepth=1 00:12:59.630 norandommap=0 00:12:59.630 numjobs=1 00:12:59.630 00:12:59.630 verify_dump=1 00:12:59.630 verify_backlog=512 00:12:59.630 verify_state_save=0 00:12:59.630 do_verify=1 00:12:59.630 verify=crc32c-intel 00:12:59.630 [job0] 00:12:59.630 filename=/dev/sda 00:12:59.630 [job1] 00:12:59.630 filename=/dev/sdb 00:12:59.630 queue_depth set to 113 (sda) 00:12:59.630 queue_depth set to 113 (sdb) 00:12:59.630 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:59.630 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:59.630 fio-3.35 00:12:59.630 Starting 2 threads 00:12:59.630 [2024-07-23 02:08:07.642192] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:12:59.630 [2024-07-23 02:08:07.646457] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:09.663 [2024-07-23 02:08:17.807223] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:19.634 [2024-07-23 02:08:28.052782] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:31.835 [2024-07-23 02:08:38.631481] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:41.818 [2024-07-23 02:08:49.001207] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:13:51.790 [2024-07-23 02:08:59.239807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:01.763 [2024-07-23 02:09:09.451589] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:11.807 [2024-07-23 02:09:19.682761] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD 
page 0xb9 00:14:21.779 [2024-07-23 02:09:29.975443] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:22.712 [2024-07-23 02:09:31.162998] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:32.685 [2024-07-23 02:09:40.086816] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:42.656 [2024-07-23 02:09:49.579740] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:14:50.768 [2024-07-23 02:09:59.165362] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:00.740 [2024-07-23 02:10:08.699096] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:10.753 [2024-07-23 02:10:18.356944] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:20.723 [2024-07-23 02:10:27.917370] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:28.834 [2024-07-23 02:10:37.558222] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:38.804 [2024-07-23 02:10:47.388249] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:40.708 [2024-07-23 02:10:49.330162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:15:50.682 [2024-07-23 02:10:57.912952] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:00.665 [2024-07-23 02:11:07.936570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:10.634 [2024-07-23 02:11:18.465523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:20.612 [2024-07-23 02:11:29.244561] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:32.808 [2024-07-23 02:11:40.279806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:42.769 [2024-07-23 
02:11:51.282682] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:16:54.968 [2024-07-23 02:12:02.309257] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:01.524 [2024-07-23 02:12:09.027628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:04.810 [2024-07-23 02:12:13.348938] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:17.049 [2024-07-23 02:12:24.485162] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:27.028 [2024-07-23 02:12:35.406209] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:39.232 [2024-07-23 02:12:46.421009] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:49.213 [2024-07-23 02:12:57.405843] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:59.194 [2024-07-23 02:13:07.756190] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:59.194 [2024-07-23 02:13:07.760949] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:59.194 00:17:59.194 job0: (groupid=0, jobs=1): err= 0: pid=72981: Tue Jul 23 02:13:07 2024 00:17:59.194 read: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(3704MiB/299999msec) 00:17:59.194 slat (usec): min=2, max=257, avg= 6.78, stdev= 2.97 00:17:59.194 clat (nsec): min=1281, max=3731.6k, avg=141490.51, stdev=24029.21 00:17:59.194 lat (usec): min=94, max=3737, avg=148.27, stdev=24.19 00:17:59.194 clat percentiles (usec): 00:17:59.194 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 120], 20.00th=[ 125], 00:17:59.194 | 30.00th=[ 127], 40.00th=[ 131], 50.00th=[ 137], 60.00th=[ 141], 00:17:59.194 | 70.00th=[ 149], 80.00th=[ 159], 90.00th=[ 174], 95.00th=[ 184], 00:17:59.194 | 99.00th=[ 208], 99.50th=[ 221], 99.90th=[ 262], 99.95th=[ 293], 00:17:59.195 | 99.99th=[ 424] 00:17:59.195 write: IOPS=3161, 
BW=12.3MiB/s (12.9MB/s)(3705MiB/299999msec); 0 zone resets 00:17:59.195 slat (usec): min=3, max=720, avg= 8.44, stdev= 4.25 00:17:59.195 clat (nsec): min=1257, max=3911.2k, avg=156800.81, stdev=35758.08 00:17:59.195 lat (usec): min=96, max=3921, avg=165.24, stdev=36.21 00:17:59.195 clat percentiles (usec): 00:17:59.195 | 1.00th=[ 97], 5.00th=[ 109], 10.00th=[ 121], 20.00th=[ 129], 00:17:59.195 | 30.00th=[ 135], 40.00th=[ 143], 50.00th=[ 155], 60.00th=[ 159], 00:17:59.195 | 70.00th=[ 172], 80.00th=[ 184], 90.00th=[ 202], 95.00th=[ 215], 00:17:59.195 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 322], 00:17:59.195 | 99.99th=[ 660] 00:17:59.195 bw ( KiB/s): min= 9768, max=16000, per=49.00%, avg=12657.21, stdev=1001.32, samples=599 00:17:59.195 iops : min= 2442, max= 4000, avg=3164.25, stdev=250.37, samples=599 00:17:59.195 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:17:59.195 lat (usec) : 100=1.76%, 250=97.85%, 500=0.36%, 750=0.01%, 1000=0.01% 00:17:59.195 lat (msec) : 2=0.01%, 4=0.01% 00:17:59.195 cpu : usr=2.46%, sys=5.04%, ctx=1905082, majf=0, minf=1 00:17:59.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.195 issued rwts: total=948224,948494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.195 job1: (groupid=0, jobs=1): err= 0: pid=72982: Tue Jul 23 02:13:07 2024 00:17:59.195 read: IOPS=3295, BW=12.9MiB/s (13.5MB/s)(3862MiB/300000msec) 00:17:59.195 slat (usec): min=2, max=431, avg= 5.18, stdev= 2.97 00:17:59.195 clat (nsec): min=1372, max=3046.0k, avg=134768.36, stdev=27092.06 00:17:59.195 lat (usec): min=75, max=3054, avg=139.95, stdev=27.17 00:17:59.195 clat percentiles (usec): 00:17:59.195 | 1.00th=[ 91], 5.00th=[ 94], 10.00th=[ 104], 20.00th=[ 122], 
00:17:59.195 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 137], 00:17:59.195 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 165], 95.00th=[ 178], 00:17:59.195 | 99.00th=[ 212], 99.50th=[ 231], 99.90th=[ 277], 99.95th=[ 314], 00:17:59.195 | 99.99th=[ 515] 00:17:59.195 write: IOPS=3296, BW=12.9MiB/s (13.5MB/s)(3863MiB/300000msec); 0 zone resets 00:17:59.195 slat (usec): min=3, max=400, avg= 7.61, stdev= 3.98 00:17:59.195 clat (nsec): min=1197, max=3734.3k, avg=153294.52, stdev=38006.54 00:17:59.195 lat (usec): min=90, max=3747, avg=160.90, stdev=38.45 00:17:59.195 clat percentiles (usec): 00:17:59.195 | 1.00th=[ 90], 5.00th=[ 96], 10.00th=[ 108], 20.00th=[ 130], 00:17:59.195 | 30.00th=[ 133], 40.00th=[ 141], 50.00th=[ 149], 60.00th=[ 159], 00:17:59.195 | 70.00th=[ 165], 80.00th=[ 180], 90.00th=[ 200], 95.00th=[ 217], 00:17:59.195 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 343], 00:17:59.195 | 99.99th=[ 619] 00:17:59.195 bw ( KiB/s): min=10368, max=15856, per=51.08%, avg=13196.35, stdev=1050.40, samples=599 00:17:59.195 iops : min= 2592, max= 3964, avg=3299.03, stdev=262.58, samples=599 00:17:59.195 lat (usec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.02% 00:17:59.195 lat (usec) : 100=7.28%, 250=91.94%, 500=0.74%, 750=0.01%, 1000=0.01% 00:17:59.195 lat (msec) : 2=0.01%, 4=0.01% 00:17:59.195 cpu : usr=2.36%, sys=4.23%, ctx=1986187, majf=0, minf=2 00:17:59.195 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:59.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:59.195 issued rwts: total=988672,988996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:59.195 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:59.195 00:17:59.195 Run status group 0 (all jobs): 00:17:59.195 READ: bw=25.2MiB/s (26.4MB/s), 12.3MiB/s-12.9MiB/s (12.9MB/s-13.5MB/s), io=7566MiB (7934MB), run=299999-300000msec 
00:17:59.195 WRITE: bw=25.2MiB/s (26.5MB/s), 12.3MiB/s-12.9MiB/s (12.9MB/s-13.5MB/s), io=7568MiB (7936MB), run=299999-300000msec 00:17:59.195 00:17:59.195 Disk stats (read/write): 00:17:59.195 sda: ios=949177/948224, merge=0/0, ticks=131541/146912, in_queue=278453, util=100.00% 00:17:59.195 sdb: ios=988428/988672, merge=0/0, ticks=129377/149979, in_queue=279355, util=100.00% 00:17:59.195 02:13:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@116 -- # fio_pid=76293 00:17:59.195 02:13:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1048576 -d 128 -t rw -r 10 00:17:59.195 02:13:07 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@118 -- # sleep 3 00:17:59.195 [global] 00:17:59.195 thread=1 00:17:59.195 invalidate=1 00:17:59.195 rw=rw 00:17:59.195 time_based=1 00:17:59.195 runtime=10 00:17:59.195 ioengine=libaio 00:17:59.195 direct=1 00:17:59.195 bs=1048576 00:17:59.195 iodepth=128 00:17:59.195 norandommap=1 00:17:59.195 numjobs=1 00:17:59.195 00:17:59.195 [job0] 00:17:59.195 filename=/dev/sda 00:17:59.195 [job1] 00:17:59.195 filename=/dev/sdb 00:17:59.195 queue_depth set to 113 (sda) 00:17:59.195 queue_depth set to 113 (sdb) 00:17:59.195 job0: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.195 job1: (g=0): rw=rw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:17:59.195 fio-3.35 00:17:59.195 Starting 2 threads 00:17:59.454 [2024-07-23 02:13:07.972763] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:17:59.454 [2024-07-23 02:13:07.974696] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:02.741 02:13:10 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:02.741 [2024-07-23 02:13:11.040397] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (raid0) received 
event(SPDK_BDEV_EVENT_REMOVE) 00:18:02.741 [2024-07-23 02:13:11.042254] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.044894] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.046716] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.048668] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.050383] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.052689] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.054900] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.056744] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.058382] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bc8 00:18:02.741 [2024-07-23 02:13:11.061966] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.063218] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 02:13:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:18:02.741 [2024-07-23 02:13:11.064682] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 02:13:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:02.741 [2024-07-23 02:13:11.066347] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.067968] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 
00:18:02.741 [2024-07-23 02:13:11.069355] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.070822] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.072631] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.072778] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.072882] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.072975] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.073056] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.073135] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.073229] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.114139] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.114249] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bca 00:18:02.741 [2024-07-23 02:13:11.114327] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.114411] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.119082] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.119352] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.122067] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 
02:13:11.122248] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.124857] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.126010] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.126208] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.126361] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.126568] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.130716] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.130894] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.133452] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.134685] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 [2024-07-23 02:13:11.134854] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=bcb 00:18:02.741 02:13:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@124 -- # for malloc_bdev in $malloc_bdevs 00:18:02.741 02:13:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:03.000 fio: io_u error on file /dev/sda: Input/output error: write offset=113246208, buflen=1048576 00:18:03.000 fio: io_u error on file /dev/sda: Input/output error: write offset=116391936, buflen=1048576 00:18:03.259 02:13:11 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write 
offset=117440512, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=118489088, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=119537664, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=120586240, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=121634816, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=122683392, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=123731968, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=114294784, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=115343360, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: read offset=92274688, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: read offset=93323264, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=124780544, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=125829120, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=126877696, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: read offset=94371840, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: read offset=95420416, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=127926272, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=128974848, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: write offset=130023424, buflen=1048576 00:18:03.518 fio: io_u error on file /dev/sda: Input/output error: 
write offset=131072000, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=132120576, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=96468992, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=133169152, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=97517568, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=0, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=1048576, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=2097152, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=98566144, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=99614720, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=3145728, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=100663296, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=101711872, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=24117248, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=4194304, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=25165824, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=5242880, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=125829120, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=102760448, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read 
offset=126877696, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=6291456, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=26214400, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=103809024, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=7340032, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=8388608, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=127926272, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=104857600, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=128974848, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=27262976, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=105906176, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=130023424, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=131072000, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=9437184, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=28311552, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=106954752, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=29360128, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=108003328, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=109051904, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read 
offset=110100480, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=132120576, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=10485760, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=30408704, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=111149056, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=31457280, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=112197632, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=113246208, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=32505856, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=114294784, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=133169152, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=115343360, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=33554432, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=34603008, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=35651584, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=0, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=11534336, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=12582912, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=116391936, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write 
offset=36700160, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=13631488, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=37748736, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=14680064, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=1048576, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=15728640, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=117440512, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=38797312, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=118489088, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=39845888, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=2097152, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=119537664, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=40894464, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=16777216, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=41943040, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=17825792, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=42991616, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=120586240, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=3145728, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read 
offset=121634816, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=44040192, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=122683392, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=4194304, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=18874368, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=5242880, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=45088768, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=19922944, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=46137344, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=20971520, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=22020096, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=47185920, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=123731968, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=6291456, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=124780544, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=7340032, buflen=1048576 00:18:03.519 fio: pid=76333, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=23068672, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=48234496, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=8388608, 
buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=49283072, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=50331648, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=9437184, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=10485760, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=11534336, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=12582912, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=51380224, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=13631488, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=52428800, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=53477376, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: write offset=54525952, buflen=1048576 00:18:03.519 fio: io_u error on file /dev/sda: Input/output error: read offset=14680064, buflen=1048576 00:18:03.519 [2024-07-23 02:13:12.157893] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Malloc2) received event(SPDK_BDEV_EVENT_REMOVE) 00:18:03.519 [2024-07-23 02:13:12.159358] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8e 00:18:03.519 [2024-07-23 02:13:12.159662] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8e 00:18:03.519 [2024-07-23 02:13:12.159900] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8e 00:18:03.519 [2024-07-23 02:13:12.161785] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8e 00:18:03.519 [2024-07-23 02:13:12.168660] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task 
for transfer_tag=c8f 00:18:03.519 [2024-07-23 02:13:12.168791] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.440733] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.441938] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.442884] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.444359] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.445390] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.446685] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.446805] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.446896] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.446986] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.447067] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.447156] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.447232] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.457471] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.457609] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c8f 00:18:06.053 [2024-07-23 02:13:14.457696] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 
00:18:06.053 [2024-07-23 02:13:14.457778] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.457868] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.457952] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.458912] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@131 -- # fio_status=0 00:18:06.053 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # wait 76293 00:18:06.053 [2024-07-23 02:13:14.465007] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.466242] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.468736] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.469947] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.471795] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.473035] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.474606] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.475862] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.477363] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.478590] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.480296] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for 
transfer_tag=c90 00:18:06.053 [2024-07-23 02:13:14.481551] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.483184] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.484443] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.485910] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.487115] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.488817] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.490034] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.491631] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.492804] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.494029] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.495595] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.496822] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.498053] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.499632] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.500913] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 [2024-07-23 02:13:14.502615] iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=c91 00:18:06.053 
fio: io_u error on file /dev/sdb: Input/output error: read offset=455081984, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: read offset=456130560, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: read offset=457179136, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: read offset=458227712, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: read offset=459276288, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: read offset=460324864, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: read offset=461373440, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=497025024, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=498073600, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=499122176, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=500170752, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=501219328, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=502267904, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=503316480, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=504365056, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=505413632, buflen=1048576 00:18:06.053 fio: io_u error on file /dev/sdb: Input/output error: write offset=506462208, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=507510784, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=508559360, buflen=1048576 
00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=509607936, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=510656512, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=511705088, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=512753664, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=513802240, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=514850816, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=515899392, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=516947968, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=517996544, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=519045120, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=520093696, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=521142272, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=522190848, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=492830720, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=493879296, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=494927872, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=495976448, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=523239424, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=524288000, 
buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=462422016, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=463470592, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=464519168, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=465567744, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=466616320, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=467664896, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=525336576, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=526385152, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=527433728, buflen=1048576 00:18:06.054 fio: pid=76334, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=528482304, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=468713472, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=469762048, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=470810624, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=471859200, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=472907776, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=529530880, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=473956352, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=530579456, 
buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=531628032, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=475004928, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=476053504, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=532676608, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=533725184, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=477102080, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=478150656, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=534773760, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=535822336, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=536870912, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=537919488, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=538968064, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=540016640, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=541065216, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=542113792, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=479199232, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=543162368, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=480247808, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read 
offset=481296384, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=544210944, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=545259520, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=482344960, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=483393536, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=546308096, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=484442112, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=547356672, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=485490688, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=486539264, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=487587840, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=488636416, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=548405248, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=489684992, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=490733568, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=549453824, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=491782144, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=550502400, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=551550976, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: 
read offset=492830720, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=493879296, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=552599552, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=494927872, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=495976448, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=497025024, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=498073600, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=553648128, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=554696704, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=499122176, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=555745280, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=500170752, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=501219328, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=556793856, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=502267904, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=557842432, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=558891008, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=559939584, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=560988160, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output 
error: read offset=503316480, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=562036736, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=504365056, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=505413632, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=506462208, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=563085312, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=507510784, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=508559360, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=509607936, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=564133888, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=565182464, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=510656512, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=566231040, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=511705088, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: read offset=512753664, buflen=1048576 00:18:06.054 fio: io_u error on file /dev/sdb: Input/output error: write offset=567279616, buflen=1048576 00:18:06.054 00:18:06.054 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=76333: Tue Jul 23 02:13:14 2024 00:18:06.054 read: IOPS=136, BW=123MiB/s (128MB/s)(472MiB/3853msec) 00:18:06.054 slat (usec): min=33, max=74653, avg=2687.42, stdev=6436.96 00:18:06.055 clat (msec): min=137, max=699, avg=353.58, stdev=97.65 
00:18:06.055 lat (msec): min=137, max=714, avg=356.13, stdev=98.46 00:18:06.055 clat percentiles (msec): 00:18:06.055 | 1.00th=[ 146], 5.00th=[ 203], 10.00th=[ 255], 20.00th=[ 275], 00:18:06.055 | 30.00th=[ 292], 40.00th=[ 309], 50.00th=[ 363], 60.00th=[ 384], 00:18:06.055 | 70.00th=[ 401], 80.00th=[ 422], 90.00th=[ 447], 95.00th=[ 472], 00:18:06.055 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 701], 99.95th=[ 701], 00:18:06.055 | 99.99th=[ 701] 00:18:06.055 bw ( KiB/s): min=43008, max=204800, per=93.68%, avg=138059.14, stdev=50579.92, samples=7 00:18:06.055 iops : min= 42, max= 200, avg=134.71, stdev=49.44, samples=7 00:18:06.055 write: IOPS=146, BW=128MiB/s (134MB/s)(492MiB/3853msec); 0 zone resets 00:18:06.055 slat (usec): min=53, max=215965, avg=3420.96, stdev=11146.82 00:18:06.055 clat (msec): min=220, max=771, avg=406.13, stdev=86.42 00:18:06.055 lat (msec): min=220, max=771, avg=409.06, stdev=87.28 00:18:06.055 clat percentiles (msec): 00:18:06.055 | 1.00th=[ 222], 5.00th=[ 259], 10.00th=[ 313], 20.00th=[ 330], 00:18:06.055 | 30.00th=[ 351], 40.00th=[ 372], 50.00th=[ 409], 60.00th=[ 443], 00:18:06.055 | 70.00th=[ 464], 80.00th=[ 477], 90.00th=[ 498], 95.00th=[ 523], 00:18:06.055 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 768], 99.95th=[ 768], 00:18:06.055 | 99.99th=[ 768] 00:18:06.055 bw ( KiB/s): min=34816, max=196608, per=91.95%, avg=143889.57, stdev=53653.89, samples=7 00:18:06.055 iops : min= 34, max= 192, avg=140.43, stdev=52.30, samples=7 00:18:06.055 lat (msec) : 250=5.68%, 500=77.01%, 750=5.40%, 1000=0.18% 00:18:06.055 cpu : usr=1.06%, sys=1.90%, ctx=361, majf=0, minf=1 00:18:06.055 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:18:06.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.055 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.055 issued rwts: total=527,565,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.055 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:18:06.055 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=76334: Tue Jul 23 02:13:14 2024 00:18:06.055 read: IOPS=77, BW=68.9MiB/s (72.3MB/s)(434MiB/6295msec) 00:18:06.055 slat (usec): min=33, max=2333.6k, avg=8602.03, stdev=105707.61 00:18:06.055 clat (msec): min=116, max=2450, avg=574.48, stdev=414.46 00:18:06.055 lat (msec): min=116, max=2456, avg=578.74, stdev=414.58 00:18:06.055 clat percentiles (msec): 00:18:06.055 | 1.00th=[ 136], 5.00th=[ 188], 10.00th=[ 213], 20.00th=[ 347], 00:18:06.055 | 30.00th=[ 422], 40.00th=[ 447], 50.00th=[ 493], 60.00th=[ 550], 00:18:06.055 | 70.00th=[ 600], 80.00th=[ 701], 90.00th=[ 793], 95.00th=[ 844], 00:18:06.055 | 99.00th=[ 2433], 99.50th=[ 2433], 99.90th=[ 2467], 99.95th=[ 2467], 00:18:06.055 | 99.99th=[ 2467] 00:18:06.055 bw ( KiB/s): min=59273, max=190464, per=82.86%, avg=122122.71, stdev=46466.40, samples=7 00:18:06.055 iops : min= 57, max= 186, avg=119.00, stdev=45.55, samples=7 00:18:06.055 write: IOPS=86, BW=74.7MiB/s (78.3MB/s)(470MiB/6295msec); 0 zone resets 00:18:06.055 slat (usec): min=50, max=239676, avg=3816.32, stdev=12151.73 00:18:06.055 clat (msec): min=217, max=2539, avg=664.26, stdev=418.22 00:18:06.055 lat (msec): min=217, max=2540, avg=668.51, stdev=418.15 00:18:06.055 clat percentiles (msec): 00:18:06.055 | 1.00th=[ 224], 5.00th=[ 257], 10.00th=[ 380], 20.00th=[ 472], 00:18:06.055 | 30.00th=[ 498], 40.00th=[ 535], 50.00th=[ 600], 60.00th=[ 625], 00:18:06.055 | 70.00th=[ 667], 80.00th=[ 760], 90.00th=[ 852], 95.00th=[ 936], 00:18:06.055 | 99.00th=[ 2534], 99.50th=[ 2534], 99.90th=[ 2534], 99.95th=[ 2534], 00:18:06.055 | 99.99th=[ 2534] 00:18:06.055 bw ( KiB/s): min=73580, max=219136, per=84.21%, avg=131782.86, stdev=47607.10, samples=7 00:18:06.055 iops : min= 71, max= 214, avg=128.43, stdev=46.66, samples=7 00:18:06.055 lat (msec) : 250=6.30%, 500=29.26%, 750=35.27%, 1000=13.18%, >=2000=3.59% 00:18:06.055 cpu : 
usr=0.92%, sys=1.05%, ctx=379, majf=0, minf=1 00:18:06.055 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.9% 00:18:06.055 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.055 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.055 issued rwts: total=490,542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.055 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.055 00:18:06.055 Run status group 0 (all jobs): 00:18:06.055 READ: bw=144MiB/s (151MB/s), 68.9MiB/s-123MiB/s (72.3MB/s-128MB/s), io=906MiB (950MB), run=3853-6295msec 00:18:06.055 WRITE: bw=153MiB/s (160MB/s), 74.7MiB/s-128MiB/s (78.3MB/s-134MB/s), io=962MiB (1009MB), run=3853-6295msec 00:18:06.055 00:18:06.055 Disk stats (read/write): 00:18:06.055 sda: ios=573/562, merge=0/0, ticks=80662/115335, in_queue=195996, util=89.62% 00:18:06.055 sdb: ios=452/497, merge=0/0, ticks=91014/129195, in_queue=220209, util=88.51% 00:18:06.055 iscsi hotplug test: fio failed as expected 00:18:06.055 Cleaning up iSCSI connection 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@132 -- # fio_status=2 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@134 -- # '[' 2 -eq 0 ']' 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@138 -- # echo 'iscsi hotplug test: fio failed as expected' 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@141 -- # iscsicleanup 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:18:06.055 Logging out of session [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:18:06.055 Logout of [sid: 19, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
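[Editor's note: the `fio_status` handling traced above (`fio.sh@131`, `@132`, `@134`, `@138`) treats a non-zero fio exit status as the expected outcome of the hotplug test. A simplified stand-in for that check, with the status value hard-coded rather than captured from a real backgrounded fio process, and the non-failure message a hypothetical addition:]

```shell
# Simplified sketch of the check traced at fio.sh@131-138; the real script
# captures fio_status from the fio process it waits on. Here the value 2 is
# hard-coded to mirror the status seen in this run.
fio_status=2  # non-zero: fio hit I/O errors after the LUN was hot-removed
if [ "$fio_status" -eq 0 ]; then
  # hypothetical message for the unexpected-success branch
  echo "iscsi hotplug test: fio did not fail as expected"
else
  echo "iscsi hotplug test: fio failed as expected"
fi
```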
00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@983 -- # rm -rf 00:18:06.055 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2016-06.io.spdk:Target3 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@144 -- # delete_tmp_files 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@14 -- # rm -f /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/fio/iscsi2.json 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@15 -- # rm -f ./local-job0-0-verify.state 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@16 -- # rm -f ./local-job1-1-verify.state 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@148 -- # killprocess 72449 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@948 -- # '[' -z 72449 ']' 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@952 -- # kill -0 72449 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # uname 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.314 02:13:14 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 72449 00:18:06.314 02:13:15 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.314 killing process with pid 72449 00:18:06.314 02:13:15 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.314 02:13:15 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 72449' 00:18:06.314 02:13:15 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@967 -- # kill 72449 00:18:06.314 02:13:15 
iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@972 -- # wait 72449 00:18:08.218 02:13:16 iscsi_tgt.iscsi_tgt_fio -- fio/fio.sh@150 -- # iscsitestfini 00:18:08.218 02:13:16 iscsi_tgt.iscsi_tgt_fio -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:18:08.218 00:18:08.218 real 5m33.621s 00:18:08.218 user 3m33.329s 00:18:08.218 sys 1m54.369s 00:18:08.218 02:13:16 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.218 02:13:16 iscsi_tgt.iscsi_tgt_fio -- common/autotest_common.sh@10 -- # set +x 00:18:08.218 ************************************ 00:18:08.218 END TEST iscsi_tgt_fio 00:18:08.218 ************************************ 00:18:08.478 02:13:17 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:18:08.478 02:13:17 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@38 -- # run_test iscsi_tgt_qos /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:18:08.478 02:13:17 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:08.478 02:13:17 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.478 02:13:17 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:18:08.478 ************************************ 00:18:08.478 START TEST iscsi_tgt_qos 00:18:08.478 ************************************ 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos/qos.sh 00:18:08.478 * Looking for test storage... 
00:18:08.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/qos 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@11 -- # iscsitestinit 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@44 -- # '[' -z 10.0.0.1 ']' 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@49 -- # '[' -z 10.0.0.2 ']' 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@54 -- # MALLOC_BDEV_SIZE=64 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@55 -- # MALLOC_BLOCK_SIZE=512 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@56 -- # IOPS_RESULT= 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@57 -- # BANDWIDTH_RESULT= 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@58 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@60 -- # timing_enter start_iscsi_tgt 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@63 -- # pid=76529 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@64 -- # echo 'Process pid: 76529' 00:18:08.478 Process pid: 76529 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@65 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@66 -- # waitforlisten 76529 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@62 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@829 -- # '[' -z 76529 ']' 00:18:08.478 02:13:17 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.478 02:13:17 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:08.737 [2024-07-23 02:13:17.318182] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:18:08.737 [2024-07-23 02:13:17.318417] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76529 ] 00:18:08.737 [2024-07-23 02:13:17.496384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.996 [2024-07-23 02:13:17.714252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.934 iscsi_tgt is listening. Running tests... 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@862 -- # return 0 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@67 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@69 -- # timing_exit start_iscsi_tgt 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@71 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@72 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@73 -- # rpc_cmd bdev_malloc_create 64 512 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 Malloc0 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@78 -- # rpc_cmd iscsi_create_target_node Target1 Target1_alias Malloc0:0 1:2 64 -d 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.934 02:13:18 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@79 
-- # sleep 1 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@81 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:18:10.871 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@82 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:18:10.871 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:18:10.871 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@84 -- # trap 'iscsicleanup; killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@87 -- # run_fio Malloc0 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.871 02:13:19 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:10.871 [2024-07-23 02:13:19.643897] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:11.130 "tick_rate": 2200000000, 
00:18:11.130 "ticks": 2328102401219, 00:18:11.130 "bdevs": [ 00:18:11.130 { 00:18:11.130 "name": "Malloc0", 00:18:11.130 "bytes_read": 41472, 00:18:11.130 "num_read_ops": 4, 00:18:11.130 "bytes_written": 0, 00:18:11.130 "num_write_ops": 0, 00:18:11.130 "bytes_unmapped": 0, 00:18:11.130 "num_unmap_ops": 0, 00:18:11.130 "bytes_copied": 0, 00:18:11.130 "num_copy_ops": 0, 00:18:11.130 "read_latency_ticks": 1499704, 00:18:11.130 "max_read_latency_ticks": 596548, 00:18:11.130 "min_read_latency_ticks": 31030, 00:18:11.130 "write_latency_ticks": 0, 00:18:11.130 "max_write_latency_ticks": 0, 00:18:11.130 "min_write_latency_ticks": 0, 00:18:11.130 "unmap_latency_ticks": 0, 00:18:11.130 "max_unmap_latency_ticks": 0, 00:18:11.130 "min_unmap_latency_ticks": 0, 00:18:11.130 "copy_latency_ticks": 0, 00:18:11.130 "max_copy_latency_ticks": 0, 00:18:11.130 "min_copy_latency_ticks": 0, 00:18:11.130 "io_error": {} 00:18:11.130 } 00:18:11.130 ] 00:18:11.130 }' 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=4 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=41472 00:18:11.130 02:13:19 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:11.130 [global] 00:18:11.130 thread=1 00:18:11.130 invalidate=1 00:18:11.130 rw=randread 00:18:11.130 time_based=1 00:18:11.130 runtime=5 00:18:11.130 ioengine=libaio 00:18:11.130 direct=1 00:18:11.130 bs=1024 00:18:11.130 iodepth=128 00:18:11.130 norandommap=1 00:18:11.130 numjobs=1 00:18:11.130 00:18:11.130 [job0] 00:18:11.130 filename=/dev/sda 00:18:11.130 queue_depth set to 113 (sda) 00:18:11.389 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, 
iodepth=128 00:18:11.389 fio-3.35 00:18:11.389 Starting 1 thread 00:18:16.663 00:18:16.664 job0: (groupid=0, jobs=1): err= 0: pid=76617: Tue Jul 23 02:13:25 2024 00:18:16.664 read: IOPS=39.7k, BW=38.8MiB/s (40.7MB/s)(194MiB/5003msec) 00:18:16.664 slat (nsec): min=1434, max=3657.9k, avg=23512.06, stdev=76751.09 00:18:16.664 clat (usec): min=1108, max=7171, avg=3195.88, stdev=232.30 00:18:16.664 lat (usec): min=1115, max=7175, avg=3219.40, stdev=222.29 00:18:16.664 clat percentiles (usec): 00:18:16.664 | 1.00th=[ 2704], 5.00th=[ 2868], 10.00th=[ 2966], 20.00th=[ 3064], 00:18:16.664 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3228], 00:18:16.664 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3458], 95.00th=[ 3589], 00:18:16.664 | 99.00th=[ 3851], 99.50th=[ 3949], 99.90th=[ 4752], 99.95th=[ 5932], 00:18:16.664 | 99.99th=[ 7177] 00:18:16.664 bw ( KiB/s): min=38662, max=41376, per=100.00%, avg=39821.56, stdev=969.93, samples=9 00:18:16.664 iops : min=38662, max=41376, avg=39821.56, stdev=969.93, samples=9 00:18:16.664 lat (msec) : 2=0.04%, 4=99.54%, 10=0.41% 00:18:16.664 cpu : usr=6.90%, sys=14.33%, ctx=118381, majf=0, minf=32 00:18:16.664 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:16.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:16.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:16.664 issued rwts: total=198845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:16.664 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:16.664 00:18:16.664 Run status group 0 (all jobs): 00:18:16.664 READ: bw=38.8MiB/s (40.7MB/s), 38.8MiB/s-38.8MiB/s (40.7MB/s-40.7MB/s), io=194MiB (204MB), run=5003-5003msec 00:18:16.664 00:18:16.664 Disk stats (read/write): 00:18:16.664 sda: ios=194603/0, merge=0/0, ticks=532516/0, in_queue=532516, util=98.11% 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:16.664 
02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:16.664 "tick_rate": 2200000000, 00:18:16.664 "ticks": 2340073174007, 00:18:16.664 "bdevs": [ 00:18:16.664 { 00:18:16.664 "name": "Malloc0", 00:18:16.664 "bytes_read": 204727808, 00:18:16.664 "num_read_ops": 198902, 00:18:16.664 "bytes_written": 0, 00:18:16.664 "num_write_ops": 0, 00:18:16.664 "bytes_unmapped": 0, 00:18:16.664 "num_unmap_ops": 0, 00:18:16.664 "bytes_copied": 0, 00:18:16.664 "num_copy_ops": 0, 00:18:16.664 "read_latency_ticks": 54892000083, 00:18:16.664 "max_read_latency_ticks": 596548, 00:18:16.664 "min_read_latency_ticks": 17596, 00:18:16.664 "write_latency_ticks": 0, 00:18:16.664 "max_write_latency_ticks": 0, 00:18:16.664 "min_write_latency_ticks": 0, 00:18:16.664 "unmap_latency_ticks": 0, 00:18:16.664 "max_unmap_latency_ticks": 0, 00:18:16.664 "min_unmap_latency_ticks": 0, 00:18:16.664 "copy_latency_ticks": 0, 00:18:16.664 "max_copy_latency_ticks": 0, 00:18:16.664 "min_copy_latency_ticks": 0, 00:18:16.664 "io_error": {} 00:18:16.664 } 00:18:16.664 ] 00:18:16.664 }' 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=198902 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=204727808 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=39779 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=40937267 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@90 -- # IOPS_LIMIT=19889 
00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@91 -- # BANDWIDTH_LIMIT=20468633 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@94 -- # READ_BANDWIDTH_LIMIT=10234316 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@98 -- # IOPS_LIMIT=19000 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@99 -- # BANDWIDTH_LIMIT_MB=19 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@100 -- # BANDWIDTH_LIMIT=19922944 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@101 -- # READ_BANDWIDTH_LIMIT_MB=9 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@102 -- # READ_BANDWIDTH_LIMIT=9437184 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@105 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 19000 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@106 -- # run_fio Malloc0 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.664 02:13:25 
iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:16.664 "tick_rate": 2200000000, 00:18:16.664 "ticks": 2340345944571, 00:18:16.664 "bdevs": [ 00:18:16.664 { 00:18:16.664 "name": "Malloc0", 00:18:16.664 "bytes_read": 204727808, 00:18:16.664 "num_read_ops": 198902, 00:18:16.664 "bytes_written": 0, 00:18:16.664 "num_write_ops": 0, 00:18:16.664 "bytes_unmapped": 0, 00:18:16.664 "num_unmap_ops": 0, 00:18:16.664 "bytes_copied": 0, 00:18:16.664 "num_copy_ops": 0, 00:18:16.664 "read_latency_ticks": 54892000083, 00:18:16.664 "max_read_latency_ticks": 596548, 00:18:16.664 "min_read_latency_ticks": 17596, 00:18:16.664 "write_latency_ticks": 0, 00:18:16.664 "max_write_latency_ticks": 0, 00:18:16.664 "min_write_latency_ticks": 0, 00:18:16.664 "unmap_latency_ticks": 0, 00:18:16.664 "max_unmap_latency_ticks": 0, 00:18:16.664 "min_unmap_latency_ticks": 0, 00:18:16.664 "copy_latency_ticks": 0, 00:18:16.664 "max_copy_latency_ticks": 0, 00:18:16.664 "min_copy_latency_ticks": 0, 00:18:16.664 "io_error": {} 00:18:16.664 } 00:18:16.664 ] 00:18:16.664 }' 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=198902 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=204727808 00:18:16.664 02:13:25 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:16.664 [global] 00:18:16.664 thread=1 00:18:16.664 invalidate=1 00:18:16.664 rw=randread 00:18:16.664 time_based=1 00:18:16.664 runtime=5 00:18:16.664 ioengine=libaio 00:18:16.664 direct=1 00:18:16.664 
bs=1024 00:18:16.664 iodepth=128 00:18:16.664 norandommap=1 00:18:16.664 numjobs=1 00:18:16.664 00:18:16.664 [job0] 00:18:16.664 filename=/dev/sda 00:18:16.664 queue_depth set to 113 (sda) 00:18:16.923 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:16.923 fio-3.35 00:18:16.923 Starting 1 thread 00:18:22.194 00:18:22.194 job0: (groupid=0, jobs=1): err= 0: pid=76708: Tue Jul 23 02:13:30 2024 00:18:22.194 read: IOPS=19.0k, BW=18.5MiB/s (19.4MB/s)(92.8MiB/5006msec) 00:18:22.194 slat (usec): min=2, max=3029, avg=49.94, stdev=184.74 00:18:22.194 clat (usec): min=1773, max=11924, avg=6692.41, stdev=486.04 00:18:22.194 lat (usec): min=1789, max=11933, avg=6742.35, stdev=471.19 00:18:22.194 clat percentiles (usec): 00:18:22.194 | 1.00th=[ 5538], 5.00th=[ 5997], 10.00th=[ 6063], 20.00th=[ 6194], 00:18:22.194 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 6980], 00:18:22.194 | 70.00th=[ 6980], 80.00th=[ 6980], 90.00th=[ 7046], 95.00th=[ 7111], 00:18:22.194 | 99.00th=[ 7701], 99.50th=[ 8094], 99.90th=[ 9765], 99.95th=[10290], 00:18:22.194 | 99.99th=[11863] 00:18:22.194 bw ( KiB/s): min=18836, max=19044, per=100.00%, avg=18988.22, stdev=66.85, samples=9 00:18:22.194 iops : min=18836, max=19044, avg=18988.22, stdev=66.85, samples=9 00:18:22.194 lat (msec) : 2=0.02%, 4=0.05%, 10=99.87%, 20=0.06% 00:18:22.194 cpu : usr=5.37%, sys=11.21%, ctx=52293, majf=0, minf=32 00:18:22.194 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:22.194 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:22.194 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:22.194 issued rwts: total=94995,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:22.194 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:22.194 00:18:22.194 Run status group 0 (all jobs): 00:18:22.194 READ: bw=18.5MiB/s (19.4MB/s), 18.5MiB/s-18.5MiB/s 
(19.4MB/s-19.4MB/s), io=92.8MiB (97.3MB), run=5006-5006msec 00:18:22.194 00:18:22.194 Disk stats (read/write): 00:18:22.194 sda: ios=92810/0, merge=0/0, ticks=535173/0, in_queue=535173, util=98.11% 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:22.194 "tick_rate": 2200000000, 00:18:22.194 "ticks": 2352301357071, 00:18:22.194 "bdevs": [ 00:18:22.194 { 00:18:22.194 "name": "Malloc0", 00:18:22.194 "bytes_read": 302002688, 00:18:22.194 "num_read_ops": 293897, 00:18:22.194 "bytes_written": 0, 00:18:22.194 "num_write_ops": 0, 00:18:22.194 "bytes_unmapped": 0, 00:18:22.194 "num_unmap_ops": 0, 00:18:22.194 "bytes_copied": 0, 00:18:22.194 "num_copy_ops": 0, 00:18:22.194 "read_latency_ticks": 621886946931, 00:18:22.194 "max_read_latency_ticks": 13633112, 00:18:22.194 "min_read_latency_ticks": 17596, 00:18:22.194 "write_latency_ticks": 0, 00:18:22.194 "max_write_latency_ticks": 0, 00:18:22.194 "min_write_latency_ticks": 0, 00:18:22.194 "unmap_latency_ticks": 0, 00:18:22.194 "max_unmap_latency_ticks": 0, 00:18:22.194 "min_unmap_latency_ticks": 0, 00:18:22.194 "copy_latency_ticks": 0, 00:18:22.194 "max_copy_latency_ticks": 0, 00:18:22.194 "min_copy_latency_ticks": 0, 00:18:22.194 "io_error": {} 00:18:22.194 } 00:18:22.194 ] 00:18:22.194 }' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=293897 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:22.194 02:13:30 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=302002688 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=18999 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=19454976 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@107 -- # verify_qos_limits 18999 19000 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=18999 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=19000 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@110 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@111 -- # run_fio Malloc0 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:22.194 "tick_rate": 2200000000, 00:18:22.194 "ticks": 2352626942421, 00:18:22.194 "bdevs": [ 00:18:22.194 { 00:18:22.194 "name": "Malloc0", 00:18:22.194 "bytes_read": 302002688, 00:18:22.194 "num_read_ops": 293897, 00:18:22.194 "bytes_written": 0, 00:18:22.194 "num_write_ops": 0, 00:18:22.194 "bytes_unmapped": 0, 00:18:22.194 "num_unmap_ops": 0, 00:18:22.194 "bytes_copied": 0, 00:18:22.194 "num_copy_ops": 0, 00:18:22.194 "read_latency_ticks": 621886946931, 00:18:22.194 "max_read_latency_ticks": 13633112, 00:18:22.194 "min_read_latency_ticks": 17596, 00:18:22.194 "write_latency_ticks": 0, 00:18:22.194 "max_write_latency_ticks": 0, 00:18:22.194 "min_write_latency_ticks": 0, 00:18:22.194 "unmap_latency_ticks": 0, 00:18:22.194 "max_unmap_latency_ticks": 0, 00:18:22.194 "min_unmap_latency_ticks": 0, 00:18:22.194 "copy_latency_ticks": 0, 00:18:22.194 "max_copy_latency_ticks": 0, 00:18:22.194 "min_copy_latency_ticks": 0, 00:18:22.194 "io_error": {} 00:18:22.194 } 00:18:22.194 ] 00:18:22.194 }' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=293897 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=302002688 00:18:22.194 02:13:30 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:22.194 [global] 
00:18:22.194 thread=1 00:18:22.194 invalidate=1 00:18:22.194 rw=randread 00:18:22.194 time_based=1 00:18:22.194 runtime=5 00:18:22.194 ioengine=libaio 00:18:22.194 direct=1 00:18:22.194 bs=1024 00:18:22.194 iodepth=128 00:18:22.194 norandommap=1 00:18:22.194 numjobs=1 00:18:22.194 00:18:22.194 [job0] 00:18:22.194 filename=/dev/sda 00:18:22.453 queue_depth set to 113 (sda) 00:18:22.453 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:22.453 fio-3.35 00:18:22.453 Starting 1 thread 00:18:27.725 00:18:27.725 job0: (groupid=0, jobs=1): err= 0: pid=76796: Tue Jul 23 02:13:36 2024 00:18:27.725 read: IOPS=42.6k, BW=41.6MiB/s (43.6MB/s)(208MiB/5003msec) 00:18:27.725 slat (nsec): min=1434, max=1903.4k, avg=21915.19, stdev=70915.93 00:18:27.725 clat (usec): min=1183, max=5283, avg=2981.80, stdev=128.26 00:18:27.725 lat (usec): min=1216, max=5287, avg=3003.71, stdev=109.87 00:18:27.725 clat percentiles (usec): 00:18:27.725 | 1.00th=[ 2638], 5.00th=[ 2737], 10.00th=[ 2868], 20.00th=[ 2933], 00:18:27.725 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:18:27.725 | 70.00th=[ 3032], 80.00th=[ 3064], 90.00th=[ 3097], 95.00th=[ 3130], 00:18:27.725 | 99.00th=[ 3261], 99.50th=[ 3294], 99.90th=[ 3556], 99.95th=[ 4752], 00:18:27.725 | 99.99th=[ 4883] 00:18:27.725 bw ( KiB/s): min=42160, max=43008, per=100.00%, avg=42607.33, stdev=316.61, samples=9 00:18:27.725 iops : min=42160, max=43008, avg=42607.33, stdev=316.61, samples=9 00:18:27.725 lat (msec) : 2=0.03%, 4=99.88%, 10=0.09% 00:18:27.725 cpu : usr=7.42%, sys=13.65%, ctx=125302, majf=0, minf=32 00:18:27.725 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:27.725 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:27.725 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:27.725 issued rwts: total=213109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:27.725 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:18:27.725 00:18:27.725 Run status group 0 (all jobs): 00:18:27.725 READ: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=208MiB (218MB), run=5003-5003msec 00:18:27.725 00:18:27.725 Disk stats (read/write): 00:18:27.725 sda: ios=208307/0, merge=0/0, ticks=532957/0, in_queue=532957, util=98.13% 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:27.725 "tick_rate": 2200000000, 00:18:27.725 "ticks": 2364585858455, 00:18:27.725 "bdevs": [ 00:18:27.725 { 00:18:27.725 "name": "Malloc0", 00:18:27.725 "bytes_read": 520226304, 00:18:27.725 "num_read_ops": 507006, 00:18:27.725 "bytes_written": 0, 00:18:27.725 "num_write_ops": 0, 00:18:27.725 "bytes_unmapped": 0, 00:18:27.725 "num_unmap_ops": 0, 00:18:27.725 "bytes_copied": 0, 00:18:27.725 "num_copy_ops": 0, 00:18:27.725 "read_latency_ticks": 676374654983, 00:18:27.725 "max_read_latency_ticks": 13633112, 00:18:27.725 "min_read_latency_ticks": 17596, 00:18:27.725 "write_latency_ticks": 0, 00:18:27.725 "max_write_latency_ticks": 0, 00:18:27.725 "min_write_latency_ticks": 0, 00:18:27.725 "unmap_latency_ticks": 0, 00:18:27.725 "max_unmap_latency_ticks": 0, 00:18:27.725 "min_unmap_latency_ticks": 0, 00:18:27.725 "copy_latency_ticks": 0, 00:18:27.725 "max_copy_latency_ticks": 0, 00:18:27.725 "min_copy_latency_ticks": 0, 00:18:27.725 "io_error": {} 00:18:27.725 } 00:18:27.725 ] 00:18:27.725 }' 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@29 -- # end_io_count=507006 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=520226304 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=42621 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=43644723 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@112 -- # '[' 42621 -gt 19000 ']' 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@115 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 19000 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@116 -- # run_fio Malloc0 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:27.725 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:27.726 "tick_rate": 2200000000, 00:18:27.726 "ticks": 2364876111747, 00:18:27.726 "bdevs": [ 00:18:27.726 { 00:18:27.726 "name": "Malloc0", 00:18:27.726 "bytes_read": 520226304, 00:18:27.726 "num_read_ops": 507006, 00:18:27.726 "bytes_written": 0, 00:18:27.726 "num_write_ops": 0, 00:18:27.726 "bytes_unmapped": 0, 00:18:27.726 "num_unmap_ops": 0, 00:18:27.726 "bytes_copied": 0, 00:18:27.726 "num_copy_ops": 0, 00:18:27.726 "read_latency_ticks": 676374654983, 00:18:27.726 "max_read_latency_ticks": 13633112, 00:18:27.726 "min_read_latency_ticks": 17596, 00:18:27.726 "write_latency_ticks": 0, 00:18:27.726 "max_write_latency_ticks": 0, 00:18:27.726 "min_write_latency_ticks": 0, 00:18:27.726 "unmap_latency_ticks": 0, 00:18:27.726 "max_unmap_latency_ticks": 0, 00:18:27.726 "min_unmap_latency_ticks": 0, 00:18:27.726 "copy_latency_ticks": 0, 00:18:27.726 "max_copy_latency_ticks": 0, 00:18:27.726 "min_copy_latency_ticks": 0, 00:18:27.726 "io_error": {} 00:18:27.726 } 00:18:27.726 ] 00:18:27.726 }' 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=507006 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=520226304 00:18:27.726 02:13:36 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:27.726 [global] 00:18:27.726 thread=1 00:18:27.726 invalidate=1 00:18:27.726 rw=randread 00:18:27.726 time_based=1 00:18:27.726 runtime=5 00:18:27.726 ioengine=libaio 00:18:27.726 direct=1 00:18:27.726 bs=1024 00:18:27.726 iodepth=128 00:18:27.726 norandommap=1 00:18:27.726 numjobs=1 00:18:27.726 00:18:27.726 [job0] 00:18:27.726 filename=/dev/sda 00:18:27.985 
queue_depth set to 113 (sda) 00:18:27.985 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:27.985 fio-3.35 00:18:27.985 Starting 1 thread 00:18:33.257 00:18:33.257 job0: (groupid=0, jobs=1): err= 0: pid=76883: Tue Jul 23 02:13:41 2024 00:18:33.257 read: IOPS=19.0k, BW=18.5MiB/s (19.4MB/s)(92.8MiB/5006msec) 00:18:33.257 slat (usec): min=2, max=3427, avg=49.80, stdev=181.96 00:18:33.257 clat (usec): min=1002, max=12074, avg=6690.63, stdev=467.89 00:18:33.257 lat (usec): min=1011, max=12078, avg=6740.43, stdev=452.97 00:18:33.257 clat percentiles (usec): 00:18:33.257 | 1.00th=[ 5604], 5.00th=[ 6063], 10.00th=[ 6063], 20.00th=[ 6194], 00:18:33.257 | 30.00th=[ 6259], 40.00th=[ 6849], 50.00th=[ 6915], 60.00th=[ 6980], 00:18:33.257 | 70.00th=[ 6980], 80.00th=[ 6980], 90.00th=[ 7046], 95.00th=[ 7111], 00:18:33.257 | 99.00th=[ 7308], 99.50th=[ 7504], 99.90th=[ 9896], 99.95th=[10421], 00:18:33.257 | 99.99th=[11994] 00:18:33.257 bw ( KiB/s): min=18828, max=19038, per=100.00%, avg=18998.89, stdev=66.67, samples=9 00:18:33.257 iops : min=18828, max=19038, avg=18998.89, stdev=66.67, samples=9 00:18:33.257 lat (msec) : 2=0.01%, 4=0.08%, 10=99.83%, 20=0.08% 00:18:33.257 cpu : usr=5.89%, sys=11.33%, ctx=52356, majf=0, minf=32 00:18:33.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:33.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:33.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:33.257 issued rwts: total=95028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:33.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:33.257 00:18:33.257 Run status group 0 (all jobs): 00:18:33.257 READ: bw=18.5MiB/s (19.4MB/s), 18.5MiB/s-18.5MiB/s (19.4MB/s-19.4MB/s), io=92.8MiB (97.3MB), run=5006-5006msec 00:18:33.257 00:18:33.257 Disk stats (read/write): 00:18:33.257 sda: ios=92862/0, merge=0/0, ticks=534374/0, 
in_queue=534374, util=98.12% 00:18:33.257 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:33.257 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.257 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:33.257 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.257 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:33.257 "tick_rate": 2200000000, 00:18:33.257 "ticks": 2376804571047, 00:18:33.257 "bdevs": [ 00:18:33.257 { 00:18:33.257 "name": "Malloc0", 00:18:33.257 "bytes_read": 617534976, 00:18:33.257 "num_read_ops": 602034, 00:18:33.257 "bytes_written": 0, 00:18:33.257 "num_write_ops": 0, 00:18:33.257 "bytes_unmapped": 0, 00:18:33.257 "num_unmap_ops": 0, 00:18:33.258 "bytes_copied": 0, 00:18:33.258 "num_copy_ops": 0, 00:18:33.258 "read_latency_ticks": 1253656388447, 00:18:33.258 "max_read_latency_ticks": 13633112, 00:18:33.258 "min_read_latency_ticks": 17596, 00:18:33.258 "write_latency_ticks": 0, 00:18:33.258 "max_write_latency_ticks": 0, 00:18:33.258 "min_write_latency_ticks": 0, 00:18:33.258 "unmap_latency_ticks": 0, 00:18:33.258 "max_unmap_latency_ticks": 0, 00:18:33.258 "min_unmap_latency_ticks": 0, 00:18:33.258 "copy_latency_ticks": 0, 00:18:33.258 "max_copy_latency_ticks": 0, 00:18:33.258 "min_copy_latency_ticks": 0, 00:18:33.258 "io_error": {} 00:18:33.258 } 00:18:33.258 ] 00:18:33.258 }' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=602034 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=617534976 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=19005 00:18:33.258 02:13:41 
iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=19461734 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@117 -- # verify_qos_limits 19005 19000 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=19005 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=19000 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:33.258 I/O rate limiting tests successful 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@119 -- # echo 'I/O rate limiting tests successful' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@122 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_ios_per_sec 0 --rw_mbytes_per_sec 19 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@123 -- # run_fio Malloc0 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:33.258 "tick_rate": 2200000000, 00:18:33.258 "ticks": 2377112582035, 00:18:33.258 "bdevs": [ 00:18:33.258 { 00:18:33.258 "name": "Malloc0", 00:18:33.258 "bytes_read": 617534976, 00:18:33.258 "num_read_ops": 602034, 00:18:33.258 "bytes_written": 0, 00:18:33.258 "num_write_ops": 0, 00:18:33.258 "bytes_unmapped": 0, 00:18:33.258 "num_unmap_ops": 0, 00:18:33.258 "bytes_copied": 0, 00:18:33.258 "num_copy_ops": 0, 00:18:33.258 "read_latency_ticks": 1253656388447, 00:18:33.258 "max_read_latency_ticks": 13633112, 00:18:33.258 "min_read_latency_ticks": 17596, 00:18:33.258 "write_latency_ticks": 0, 00:18:33.258 "max_write_latency_ticks": 0, 00:18:33.258 "min_write_latency_ticks": 0, 00:18:33.258 "unmap_latency_ticks": 0, 00:18:33.258 "max_unmap_latency_ticks": 0, 00:18:33.258 "min_unmap_latency_ticks": 0, 00:18:33.258 "copy_latency_ticks": 0, 00:18:33.258 "max_copy_latency_ticks": 0, 00:18:33.258 "min_copy_latency_ticks": 0, 00:18:33.258 "io_error": {} 00:18:33.258 } 00:18:33.258 ] 00:18:33.258 }' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=602034 00:18:33.258 02:13:41 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:33.258 02:13:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=617534976 00:18:33.258 02:13:42 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:33.517 [global] 
00:18:33.517 thread=1 00:18:33.517 invalidate=1 00:18:33.517 rw=randread 00:18:33.517 time_based=1 00:18:33.517 runtime=5 00:18:33.517 ioengine=libaio 00:18:33.517 direct=1 00:18:33.517 bs=1024 00:18:33.517 iodepth=128 00:18:33.517 norandommap=1 00:18:33.517 numjobs=1 00:18:33.517 00:18:33.517 [job0] 00:18:33.517 filename=/dev/sda 00:18:33.517 queue_depth set to 113 (sda) 00:18:33.517 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:33.517 fio-3.35 00:18:33.517 Starting 1 thread 00:18:38.789 00:18:38.789 job0: (groupid=0, jobs=1): err= 0: pid=76972: Tue Jul 23 02:13:47 2024 00:18:38.789 read: IOPS=19.5k, BW=19.0MiB/s (19.9MB/s)(95.1MiB/5005msec) 00:18:38.789 slat (usec): min=2, max=2224, avg=48.54, stdev=197.11 00:18:38.789 clat (usec): min=1074, max=11124, avg=6527.51, stdev=607.20 00:18:38.789 lat (usec): min=1081, max=11128, avg=6576.05, stdev=605.56 00:18:38.789 clat percentiles (usec): 00:18:38.789 | 1.00th=[ 5211], 5.00th=[ 5538], 10.00th=[ 5800], 20.00th=[ 6063], 00:18:38.789 | 30.00th=[ 6194], 40.00th=[ 6325], 50.00th=[ 6456], 60.00th=[ 6718], 00:18:38.789 | 70.00th=[ 6915], 80.00th=[ 7111], 90.00th=[ 7308], 95.00th=[ 7439], 00:18:38.789 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[ 8094], 99.95th=[ 8848], 00:18:38.789 | 99.99th=[10421] 00:18:38.789 bw ( KiB/s): min=19428, max=19494, per=100.00%, avg=19469.56, stdev=22.93, samples=9 00:18:38.789 iops : min=19428, max=19494, avg=19469.56, stdev=22.93, samples=9 00:18:38.789 lat (msec) : 2=0.03%, 4=0.07%, 10=99.87%, 20=0.03% 00:18:38.789 cpu : usr=5.94%, sys=11.13%, ctx=52817, majf=0, minf=32 00:18:38.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:38.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:38.789 issued rwts: total=97390,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.789 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:18:38.789 00:18:38.789 Run status group 0 (all jobs): 00:18:38.789 READ: bw=19.0MiB/s (19.9MB/s), 19.0MiB/s-19.0MiB/s (19.9MB/s-19.9MB/s), io=95.1MiB (99.7MB), run=5005-5005msec 00:18:38.789 00:18:38.789 Disk stats (read/write): 00:18:38.789 sda: ios=95155/0, merge=0/0, ticks=529690/0, in_queue=529690, util=98.11% 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:38.789 "tick_rate": 2200000000, 00:18:38.789 "ticks": 2389018427051, 00:18:38.789 "bdevs": [ 00:18:38.789 { 00:18:38.789 "name": "Malloc0", 00:18:38.789 "bytes_read": 717262336, 00:18:38.789 "num_read_ops": 699424, 00:18:38.789 "bytes_written": 0, 00:18:38.789 "num_write_ops": 0, 00:18:38.789 "bytes_unmapped": 0, 00:18:38.789 "num_unmap_ops": 0, 00:18:38.789 "bytes_copied": 0, 00:18:38.789 "num_copy_ops": 0, 00:18:38.789 "read_latency_ticks": 1782685810613, 00:18:38.789 "max_read_latency_ticks": 13633112, 00:18:38.789 "min_read_latency_ticks": 17596, 00:18:38.789 "write_latency_ticks": 0, 00:18:38.789 "max_write_latency_ticks": 0, 00:18:38.789 "min_write_latency_ticks": 0, 00:18:38.789 "unmap_latency_ticks": 0, 00:18:38.789 "max_unmap_latency_ticks": 0, 00:18:38.789 "min_unmap_latency_ticks": 0, 00:18:38.789 "copy_latency_ticks": 0, 00:18:38.789 "max_copy_latency_ticks": 0, 00:18:38.789 "min_copy_latency_ticks": 0, 00:18:38.789 "io_error": {} 00:18:38.789 } 00:18:38.789 ] 00:18:38.789 }' 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@29 -- # end_io_count=699424 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=717262336 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=19478 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=19945472 00:18:38.789 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@124 -- # verify_qos_limits 19945472 19922944 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=19945472 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=19922944 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@127 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 0 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@128 -- # run_fio Malloc0 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- 
qos/qos.sh@19 -- # local end_bytes_read 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:38.790 "tick_rate": 2200000000, 00:18:38.790 "ticks": 2389344127603, 00:18:38.790 "bdevs": [ 00:18:38.790 { 00:18:38.790 "name": "Malloc0", 00:18:38.790 "bytes_read": 717262336, 00:18:38.790 "num_read_ops": 699424, 00:18:38.790 "bytes_written": 0, 00:18:38.790 "num_write_ops": 0, 00:18:38.790 "bytes_unmapped": 0, 00:18:38.790 "num_unmap_ops": 0, 00:18:38.790 "bytes_copied": 0, 00:18:38.790 "num_copy_ops": 0, 00:18:38.790 "read_latency_ticks": 1782685810613, 00:18:38.790 "max_read_latency_ticks": 13633112, 00:18:38.790 "min_read_latency_ticks": 17596, 00:18:38.790 "write_latency_ticks": 0, 00:18:38.790 "max_write_latency_ticks": 0, 00:18:38.790 "min_write_latency_ticks": 0, 00:18:38.790 "unmap_latency_ticks": 0, 00:18:38.790 "max_unmap_latency_ticks": 0, 00:18:38.790 "min_unmap_latency_ticks": 0, 00:18:38.790 "copy_latency_ticks": 0, 00:18:38.790 "max_copy_latency_ticks": 0, 00:18:38.790 "min_copy_latency_ticks": 0, 00:18:38.790 "io_error": {} 00:18:38.790 } 00:18:38.790 ] 00:18:38.790 }' 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=699424 00:18:38.790 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:39.047 02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=717262336 00:18:39.047 
02:13:47 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:39.047 [global] 00:18:39.047 thread=1 00:18:39.047 invalidate=1 00:18:39.047 rw=randread 00:18:39.047 time_based=1 00:18:39.047 runtime=5 00:18:39.047 ioengine=libaio 00:18:39.047 direct=1 00:18:39.047 bs=1024 00:18:39.047 iodepth=128 00:18:39.047 norandommap=1 00:18:39.047 numjobs=1 00:18:39.047 00:18:39.047 [job0] 00:18:39.047 filename=/dev/sda 00:18:39.047 queue_depth set to 113 (sda) 00:18:39.047 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:39.047 fio-3.35 00:18:39.047 Starting 1 thread 00:18:44.345 00:18:44.345 job0: (groupid=0, jobs=1): err= 0: pid=77066: Tue Jul 23 02:13:52 2024 00:18:44.345 read: IOPS=37.4k, BW=36.5MiB/s (38.3MB/s)(183MiB/5003msec) 00:18:44.346 slat (usec): min=2, max=575, avg=24.91, stdev=77.71 00:18:44.346 clat (usec): min=1118, max=5634, avg=3399.15, stdev=174.29 00:18:44.346 lat (usec): min=1125, max=5636, avg=3424.05, stdev=157.60 00:18:44.346 clat percentiles (usec): 00:18:44.346 | 1.00th=[ 2966], 5.00th=[ 3130], 10.00th=[ 3228], 20.00th=[ 3294], 00:18:44.346 | 30.00th=[ 3326], 40.00th=[ 3359], 50.00th=[ 3392], 60.00th=[ 3425], 00:18:44.346 | 70.00th=[ 3458], 80.00th=[ 3523], 90.00th=[ 3621], 95.00th=[ 3687], 00:18:44.346 | 99.00th=[ 3851], 99.50th=[ 3916], 99.90th=[ 4293], 99.95th=[ 4424], 00:18:44.346 | 99.99th=[ 5211] 00:18:44.346 bw ( KiB/s): min=36072, max=38048, per=99.90%, avg=37330.44, stdev=564.08, samples=9 00:18:44.346 iops : min=36072, max=38048, avg=37330.44, stdev=564.08, samples=9 00:18:44.346 lat (msec) : 2=0.03%, 4=99.68%, 10=0.29% 00:18:44.346 cpu : usr=8.40%, sys=14.25%, ctx=104936, majf=0, minf=32 00:18:44.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:44.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.346 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.346 issued rwts: total=186949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.346 00:18:44.346 Run status group 0 (all jobs): 00:18:44.346 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=183MiB (191MB), run=5003-5003msec 00:18:44.346 00:18:44.346 Disk stats (read/write): 00:18:44.346 sda: ios=182641/0, merge=0/0, ticks=534694/0, in_queue=534694, util=98.11% 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:44.346 "tick_rate": 2200000000, 00:18:44.346 "ticks": 2401299212881, 00:18:44.346 "bdevs": [ 00:18:44.346 { 00:18:44.346 "name": "Malloc0", 00:18:44.346 "bytes_read": 908698112, 00:18:44.346 "num_read_ops": 886373, 00:18:44.346 "bytes_written": 0, 00:18:44.346 "num_write_ops": 0, 00:18:44.346 "bytes_unmapped": 0, 00:18:44.346 "num_unmap_ops": 0, 00:18:44.346 "bytes_copied": 0, 00:18:44.346 "num_copy_ops": 0, 00:18:44.346 "read_latency_ticks": 1837003585566, 00:18:44.346 "max_read_latency_ticks": 13633112, 00:18:44.346 "min_read_latency_ticks": 17596, 00:18:44.346 "write_latency_ticks": 0, 00:18:44.346 "max_write_latency_ticks": 0, 00:18:44.346 "min_write_latency_ticks": 0, 00:18:44.346 "unmap_latency_ticks": 0, 00:18:44.346 "max_unmap_latency_ticks": 0, 00:18:44.346 "min_unmap_latency_ticks": 0, 00:18:44.346 "copy_latency_ticks": 0, 00:18:44.346 "max_copy_latency_ticks": 0, 00:18:44.346 "min_copy_latency_ticks": 0, 00:18:44.346 "io_error": {} 00:18:44.346 } 00:18:44.346 ] 00:18:44.346 }' 
00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=886373 00:18:44.346 02:13:52 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=908698112 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=37389 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=38287155 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@129 -- # '[' 38287155 -gt 19922944 ']' 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@132 -- # rpc_cmd bdev_set_qos_limit Malloc0 --rw_mbytes_per_sec 19 --r_mbytes_per_sec 9 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@133 -- # run_fio Malloc0 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@14 -- # local bdev_name=Malloc0 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@15 -- # local iostats 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@16 -- # local start_io_count 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@17 -- # local start_bytes_read 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@18 -- # local end_io_count 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@19 -- # local end_bytes_read 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@20 -- # local run_time=5 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 
00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@22 -- # iostats='{ 00:18:44.346 "tick_rate": 2200000000, 00:18:44.346 "ticks": 2401598398677, 00:18:44.346 "bdevs": [ 00:18:44.346 { 00:18:44.346 "name": "Malloc0", 00:18:44.346 "bytes_read": 908698112, 00:18:44.346 "num_read_ops": 886373, 00:18:44.346 "bytes_written": 0, 00:18:44.346 "num_write_ops": 0, 00:18:44.346 "bytes_unmapped": 0, 00:18:44.346 "num_unmap_ops": 0, 00:18:44.346 "bytes_copied": 0, 00:18:44.346 "num_copy_ops": 0, 00:18:44.346 "read_latency_ticks": 1837003585566, 00:18:44.346 "max_read_latency_ticks": 13633112, 00:18:44.346 "min_read_latency_ticks": 17596, 00:18:44.346 "write_latency_ticks": 0, 00:18:44.346 "max_write_latency_ticks": 0, 00:18:44.346 "min_write_latency_ticks": 0, 00:18:44.346 "unmap_latency_ticks": 0, 00:18:44.346 "max_unmap_latency_ticks": 0, 00:18:44.346 "min_unmap_latency_ticks": 0, 00:18:44.346 "copy_latency_ticks": 0, 00:18:44.346 "max_copy_latency_ticks": 0, 00:18:44.346 "min_copy_latency_ticks": 0, 00:18:44.346 "io_error": {} 00:18:44.346 } 00:18:44.346 ] 00:18:44.346 }' 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # jq -r '.bdevs[0].num_read_ops' 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@23 -- # start_io_count=886373 00:18:44.346 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # jq -r '.bdevs[0].bytes_read' 00:18:44.610 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@24 -- # start_bytes_read=908698112 00:18:44.610 02:13:53 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 1024 -d 128 -t randread -r 5 00:18:44.610 [global] 00:18:44.610 thread=1 00:18:44.610 invalidate=1 00:18:44.610 rw=randread 00:18:44.610 time_based=1 00:18:44.610 runtime=5 00:18:44.610 ioengine=libaio 00:18:44.610 
direct=1 00:18:44.610 bs=1024 00:18:44.610 iodepth=128 00:18:44.610 norandommap=1 00:18:44.610 numjobs=1 00:18:44.610 00:18:44.610 [job0] 00:18:44.610 filename=/dev/sda 00:18:44.610 queue_depth set to 113 (sda) 00:18:44.610 job0: (g=0): rw=randread, bs=(R) 1024B-1024B, (W) 1024B-1024B, (T) 1024B-1024B, ioengine=libaio, iodepth=128 00:18:44.610 fio-3.35 00:18:44.610 Starting 1 thread 00:18:49.880 00:18:49.880 job0: (groupid=0, jobs=1): err= 0: pid=77147: Tue Jul 23 02:13:58 2024 00:18:49.880 read: IOPS=9218, BW=9218KiB/s (9440kB/s)(45.1MiB/5012msec) 00:18:49.880 slat (nsec): min=1488, max=2171.7k, avg=104943.05, stdev=275298.16 00:18:49.880 clat (usec): min=1526, max=25823, avg=13776.64, stdev=686.42 00:18:49.880 lat (usec): min=1564, max=25831, avg=13881.58, stdev=646.49 00:18:49.880 clat percentiles (usec): 00:18:49.880 | 1.00th=[12518], 5.00th=[13042], 10.00th=[13042], 20.00th=[13173], 00:18:49.880 | 30.00th=[13829], 40.00th=[13960], 50.00th=[13960], 60.00th=[13960], 00:18:49.880 | 70.00th=[14091], 80.00th=[14091], 90.00th=[14091], 95.00th=[14222], 00:18:49.880 | 99.00th=[14353], 99.50th=[14484], 99.90th=[20579], 99.95th=[23725], 00:18:49.880 | 99.99th=[25822] 00:18:49.880 bw ( KiB/s): min= 9090, max= 9252, per=99.92%, avg=9211.50, stdev=44.37, samples=10 00:18:49.880 iops : min= 9090, max= 9252, avg=9211.50, stdev=44.37, samples=10 00:18:49.880 lat (msec) : 2=0.03%, 4=0.03%, 10=0.16%, 20=99.67%, 50=0.11% 00:18:49.880 cpu : usr=3.57%, sys=6.84%, ctx=36083, majf=0, minf=32 00:18:49.880 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:49.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:49.880 issued rwts: total=46203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.880 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:49.880 00:18:49.880 Run status group 0 (all jobs): 00:18:49.880 READ: bw=9218KiB/s 
(9440kB/s), 9218KiB/s-9218KiB/s (9440kB/s-9440kB/s), io=45.1MiB (47.3MB), run=5012-5012msec 00:18:49.880 00:18:49.880 Disk stats (read/write): 00:18:49.880 sda: ios=45075/0, merge=0/0, ticks=541531/0, in_queue=541531, util=98.11% 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # rpc_cmd bdev_get_iostat -b Malloc0 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@28 -- # iostats='{ 00:18:49.880 "tick_rate": 2200000000, 00:18:49.880 "ticks": 2413565611759, 00:18:49.880 "bdevs": [ 00:18:49.880 { 00:18:49.880 "name": "Malloc0", 00:18:49.880 "bytes_read": 956009984, 00:18:49.880 "num_read_ops": 932576, 00:18:49.880 "bytes_written": 0, 00:18:49.880 "num_write_ops": 0, 00:18:49.880 "bytes_unmapped": 0, 00:18:49.880 "num_unmap_ops": 0, 00:18:49.880 "bytes_copied": 0, 00:18:49.880 "num_copy_ops": 0, 00:18:49.880 "read_latency_ticks": 2490714030191, 00:18:49.880 "max_read_latency_ticks": 16698906, 00:18:49.880 "min_read_latency_ticks": 17596, 00:18:49.880 "write_latency_ticks": 0, 00:18:49.880 "max_write_latency_ticks": 0, 00:18:49.880 "min_write_latency_ticks": 0, 00:18:49.880 "unmap_latency_ticks": 0, 00:18:49.880 "max_unmap_latency_ticks": 0, 00:18:49.880 "min_unmap_latency_ticks": 0, 00:18:49.880 "copy_latency_ticks": 0, 00:18:49.880 "max_copy_latency_ticks": 0, 00:18:49.880 "min_copy_latency_ticks": 0, 00:18:49.880 "io_error": {} 00:18:49.880 } 00:18:49.880 ] 00:18:49.880 }' 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # jq -r '.bdevs[0].num_read_ops' 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@29 -- # end_io_count=932576 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # jq -r '.bdevs[0].bytes_read' 
00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@30 -- # end_bytes_read=956009984 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@32 -- # IOPS_RESULT=9240 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@33 -- # BANDWIDTH_RESULT=9462374 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@134 -- # verify_qos_limits 9462374 9437184 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@37 -- # local result=9462374 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@38 -- # local limit=9437184 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # bc 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@40 -- # '[' 1 -eq 1 ']' 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # bc 00:18:49.880 I/O bandwidth limiting tests successful 00:18:49.880 Cleaning up iSCSI connection 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@41 -- # '[' 1 -eq 1 ']' 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@136 -- # echo 'I/O bandwidth limiting tests successful' 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@138 -- # iscsicleanup 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:18:49.880 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:18:50.139 Logging out of session [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:18:50.139 Logout of [sid: 20, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@983 -- # rm -rf 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@139 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:Target1 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@141 -- # rm -f ./local-job0-0-verify.state 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@143 -- # killprocess 76529 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@948 -- # '[' -z 76529 ']' 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@952 -- # kill -0 76529 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # uname 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 76529 00:18:50.139 killing process with pid 76529 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 76529' 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@967 -- # kill 76529 00:18:50.139 02:13:58 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@972 -- # wait 76529 00:18:52.085 
02:14:00 iscsi_tgt.iscsi_tgt_qos -- qos/qos.sh@145 -- # iscsitestfini 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_qos -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:18:52.085 00:18:52.085 real 0m43.655s 00:18:52.085 user 0m38.770s 00:18:52.085 sys 0m10.749s 00:18:52.085 ************************************ 00:18:52.085 END TEST iscsi_tgt_qos 00:18:52.085 ************************************ 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_qos -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 02:14:00 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:18:52.085 02:14:00 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@39 -- # run_test iscsi_tgt_ip_migration /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:18:52.085 02:14:00 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:52.085 02:14:00 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.085 02:14:00 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:18:52.085 ************************************ 00:18:52.085 START TEST iscsi_tgt_ip_migration 00:18:52.085 ************************************ 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration/ip_migration.sh 00:18:52.085 * Looking for test storage... 
00:18:52.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ip_migration 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@11 -- # iscsitestinit 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@13 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@14 -- # pids=() 00:18:52.085 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@16 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:52.344 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:18:52.345 02:14:00 
iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:18:52.345 #define SPDK_CONFIG_H 00:18:52.345 #define SPDK_CONFIG_APPS 1 00:18:52.345 #define SPDK_CONFIG_ARCH native 00:18:52.345 #define SPDK_CONFIG_ASAN 1 00:18:52.345 #undef SPDK_CONFIG_AVAHI 00:18:52.345 #undef SPDK_CONFIG_CET 00:18:52.345 #define SPDK_CONFIG_COVERAGE 1 00:18:52.345 #define SPDK_CONFIG_CROSS_PREFIX 00:18:52.345 #undef SPDK_CONFIG_CRYPTO 00:18:52.345 #undef SPDK_CONFIG_CRYPTO_MLX5 00:18:52.345 #undef SPDK_CONFIG_CUSTOMOCF 00:18:52.345 #undef SPDK_CONFIG_DAOS 00:18:52.345 #define SPDK_CONFIG_DAOS_DIR 00:18:52.345 #define SPDK_CONFIG_DEBUG 1 00:18:52.345 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:18:52.345 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:52.345 #define SPDK_CONFIG_DPDK_INC_DIR 00:18:52.345 #define SPDK_CONFIG_DPDK_LIB_DIR 00:18:52.345 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:18:52.345 #undef SPDK_CONFIG_DPDK_UADK 00:18:52.345 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:18:52.345 #define SPDK_CONFIG_EXAMPLES 1 
00:18:52.345 #undef SPDK_CONFIG_FC 00:18:52.345 #define SPDK_CONFIG_FC_PATH 00:18:52.345 #define SPDK_CONFIG_FIO_PLUGIN 1 00:18:52.345 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:18:52.345 #undef SPDK_CONFIG_FUSE 00:18:52.345 #undef SPDK_CONFIG_FUZZER 00:18:52.345 #define SPDK_CONFIG_FUZZER_LIB 00:18:52.345 #undef SPDK_CONFIG_GOLANG 00:18:52.345 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:18:52.345 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:18:52.345 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:18:52.345 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:18:52.345 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:18:52.345 #undef SPDK_CONFIG_HAVE_LIBBSD 00:18:52.345 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:18:52.345 #define SPDK_CONFIG_IDXD 1 00:18:52.345 #define SPDK_CONFIG_IDXD_KERNEL 1 00:18:52.345 #undef SPDK_CONFIG_IPSEC_MB 00:18:52.345 #define SPDK_CONFIG_IPSEC_MB_DIR 00:18:52.345 #define SPDK_CONFIG_ISAL 1 00:18:52.345 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:18:52.345 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:18:52.345 #define SPDK_CONFIG_LIBDIR 00:18:52.345 #undef SPDK_CONFIG_LTO 00:18:52.345 #define SPDK_CONFIG_MAX_LCORES 128 00:18:52.345 #define SPDK_CONFIG_NVME_CUSE 1 00:18:52.345 #undef SPDK_CONFIG_OCF 00:18:52.345 #define SPDK_CONFIG_OCF_PATH 00:18:52.345 #define SPDK_CONFIG_OPENSSL_PATH 00:18:52.345 #undef SPDK_CONFIG_PGO_CAPTURE 00:18:52.345 #define SPDK_CONFIG_PGO_DIR 00:18:52.345 #undef SPDK_CONFIG_PGO_USE 00:18:52.345 #define SPDK_CONFIG_PREFIX /usr/local 00:18:52.345 #undef SPDK_CONFIG_RAID5F 00:18:52.345 #define SPDK_CONFIG_RBD 1 00:18:52.345 #define SPDK_CONFIG_RDMA 1 00:18:52.345 #define SPDK_CONFIG_RDMA_PROV verbs 00:18:52.345 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:18:52.345 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:18:52.345 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:18:52.345 #define SPDK_CONFIG_SHARED 1 00:18:52.345 #undef SPDK_CONFIG_SMA 00:18:52.345 #define SPDK_CONFIG_TESTS 1 00:18:52.345 #undef SPDK_CONFIG_TSAN 00:18:52.345 #define SPDK_CONFIG_UBLK 1 
00:18:52.345 #define SPDK_CONFIG_UBSAN 1 00:18:52.345 #undef SPDK_CONFIG_UNIT_TESTS 00:18:52.345 #undef SPDK_CONFIG_URING 00:18:52.345 #define SPDK_CONFIG_URING_PATH 00:18:52.345 #undef SPDK_CONFIG_URING_ZNS 00:18:52.345 #undef SPDK_CONFIG_USDT 00:18:52.345 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:18:52.345 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:18:52.345 #undef SPDK_CONFIG_VFIO_USER 00:18:52.345 #define SPDK_CONFIG_VFIO_USER_DIR 00:18:52.345 #define SPDK_CONFIG_VHOST 1 00:18:52.345 #define SPDK_CONFIG_VIRTIO 1 00:18:52.345 #undef SPDK_CONFIG_VTUNE 00:18:52.345 #define SPDK_CONFIG_VTUNE_DIR 00:18:52.345 #define SPDK_CONFIG_WERROR 1 00:18:52.345 #define SPDK_CONFIG_WPDK_DIR 00:18:52.345 #undef SPDK_CONFIG_XNVME 00:18:52.345 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@17 -- # NETMASK=127.0.0.0/24 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@18 -- # MIGRATION_ADDRESS=127.0.0.2 00:18:52.345 Running ip migration tests 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@56 -- # echo 'Running ip migration tests' 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@57 -- # timing_enter start_iscsi_tgt_0 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@58 -- # rpc_first_addr=/var/tmp/spdk0.sock 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@59 -- # iscsi_tgt_start /var/tmp/spdk0.sock 1 00:18:52.345 Process pid: 77295 00:18:52.345 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=77295 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -m 1 --wait-for-rpc 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 77295' 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 77295 /var/tmp/spdk0.sock 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 77295 ']' 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.345 02:14:00 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:52.345 [2024-07-23 02:14:01.039920] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:52.345 [2024-07-23 02:14:01.040465] [ DPDK EAL parameters: iscsi --no-shconf -c 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77295 ] 00:18:52.604 [2024-07-23 02:14:01.216747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.863 [2024-07-23 02:14:01.478910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_set_options -o 30 -a 64 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk0.sock framework_start_init 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.430 02:14:01 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:53.998 iscsi_tgt is listening. Running tests... 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk0.sock bdev_malloc_create 64 512 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:53.998 Malloc0 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@60 -- # timing_exit start_iscsi_tgt_0 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@62 -- # timing_enter start_iscsi_tgt_1 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:53.998 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:53.998 Process pid: 77335 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@63 -- # rpc_second_addr=/var/tmp/spdk1.sock 00:18:53.999 02:14:02 
iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@64 -- # iscsi_tgt_start /var/tmp/spdk1.sock 2 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@39 -- # pid=77335 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -m 2 --wait-for-rpc 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@40 -- # echo 'Process pid: 77335' 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@41 -- # pids+=($pid) 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@43 -- # trap 'kill_all_iscsi_target; exit 1' SIGINT SIGTERM EXIT 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@45 -- # waitforlisten 77335 /var/tmp/spdk1.sock 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@829 -- # '[' -z 77335 ']' 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 00:18:53.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:53.999 02:14:02 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:54.258 [2024-07-23 02:14:02.843802] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:18:54.258 [2024-07-23 02:14:02.844279] [ DPDK EAL parameters: iscsi --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77335 ] 00:18:54.258 [2024-07-23 02:14:03.021940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.517 [2024-07-23 02:14:03.269244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.083 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.083 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@862 -- # return 0 00:18:55.083 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@46 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_set_options -o 30 -a 64 00:18:55.083 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.083 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.084 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.084 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@47 -- # rpc_cmd -s /var/tmp/spdk1.sock framework_start_init 00:18:55.084 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.084 02:14:03 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.652 iscsi_tgt is listening. Running tests... 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@48 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@50 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 127.0.0.0/24 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@51 -- # rpc_cmd -s /var/tmp/spdk1.sock bdev_malloc_create 64 512 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.652 Malloc0 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@53 -- # trap 'kill_all_iscsi_target; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@65 -- # timing_exit start_iscsi_tgt_1 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:55.652 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@67 -- # rpc_add_target_node /var/tmp/spdk0.sock 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk0.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.910 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:18:55.911 02:14:04 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@69 -- # sleep 1 00:18:56.846 02:14:05 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@70 -- # iscsiadm -m discovery -t sendtargets -p 127.0.0.2:3260 00:18:56.846 127.0.0.2:3260,1 iqn.2016-06.io.spdk:target1 00:18:56.846 02:14:05 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@71 -- # sleep 1 00:18:57.783 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@72 -- # iscsiadm -m node --login -p 127.0.0.2:3260 00:18:57.783 Logging in to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 00:18:57.783 Login to [iface: default, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 
00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@73 -- # waitforiscsidevices 1 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@116 -- # local num=1 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:18:57.784 [2024-07-23 02:14:06.554942] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@119 -- # n=1 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@123 -- # return 0 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@77 -- # fiopid=77417 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@78 -- # sleep 3 00:18:57.784 02:14:06 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 32 -t randrw -r 12 00:18:58.042 [global] 00:18:58.042 thread=1 00:18:58.042 invalidate=1 00:18:58.042 rw=randrw 00:18:58.042 time_based=1 00:18:58.042 runtime=12 00:18:58.042 ioengine=libaio 00:18:58.042 direct=1 00:18:58.042 bs=4096 00:18:58.042 iodepth=32 00:18:58.042 norandommap=1 00:18:58.042 numjobs=1 00:18:58.042 00:18:58.042 [job0] 00:18:58.042 filename=/dev/sda 00:18:58.042 queue_depth set to 113 (sda) 00:18:58.042 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32 00:18:58.042 fio-3.35 
00:18:58.042 Starting 1 thread 00:18:58.042 [2024-07-23 02:14:06.734414] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:01.324 02:14:09 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@80 -- # rpc_cmd -s /var/tmp/spdk0.sock spdk_kill_instance SIGTERM 00:19:01.324 02:14:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.324 02:14:09 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:01.891 02:14:10 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.891 02:14:10 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@81 -- # wait 77295 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@83 -- # rpc_add_target_node /var/tmp/spdk1.sock 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@28 -- # ip netns exec spdk_iscsi_ns ip addr add 127.0.0.2/24 dev spdk_tgt_int 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@29 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 127.0.0.2:3260 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@30 -- # rpc_cmd -s /var/tmp/spdk1.sock iscsi_create_target_node target1 target1_alias Malloc0:0 1:2 64 -d 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@31 -- # ip netns exec spdk_iscsi_ns ip addr del 127.0.0.2/24 dev spdk_tgt_int 00:19:02.825 02:14:11 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@85 -- # wait 77417 00:19:10.939 [2024-07-23 02:14:18.845216] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:10.939 00:19:10.939 job0: (groupid=0, jobs=1): err= 0: pid=77444: Tue Jul 23 02:14:18 2024 00:19:10.939 read: IOPS=12.6k, BW=49.0MiB/s (51.4MB/s)(588MiB/12001msec) 00:19:10.939 slat (nsec): min=2534, max=66726, avg=5599.07, stdev=4536.71 00:19:10.939 clat (usec): min=258, max=2008.3k, avg=1279.21, stdev=20682.24 00:19:10.939 lat (usec): min=303, max=2008.3k, avg=1284.81, stdev=20682.25 00:19:10.939 clat percentiles (usec): 00:19:10.939 | 1.00th=[ 693], 5.00th=[ 807], 10.00th=[ 865], 00:19:10.939 | 20.00th=[ 930], 30.00th=[ 963], 40.00th=[ 996], 00:19:10.939 | 50.00th=[ 1037], 60.00th=[ 1074], 70.00th=[ 1123], 00:19:10.939 | 80.00th=[ 1205], 90.00th=[ 1319], 95.00th=[ 1401], 00:19:10.939 | 99.00th=[ 1614], 99.50th=[ 1811], 99.90th=[ 2868], 00:19:10.939 | 99.95th=[ 3261], 99.99th=[2004878] 00:19:10.939 bw ( KiB/s): min=26464, max=64112, per=100.00%, avg=57328.80, stdev=9836.29, samples=20 00:19:10.939 iops : min= 6616, max=16028, avg=14332.20, stdev=2459.07, samples=20 00:19:10.939 write: IOPS=12.6k, BW=49.1MiB/s (51.5MB/s)(589MiB/12001msec); 0 zone resets 00:19:10.939 slat (nsec): min=2546, max=66581, avg=5723.70, stdev=4571.03 00:19:10.939 clat (usec): min=482, max=2008.3k, avg=1255.58, stdev=20669.64 00:19:10.939 lat (usec): min=497, max=2008.3k, avg=1261.30, stdev=20669.64 00:19:10.939 clat percentiles (usec): 00:19:10.939 | 1.00th=[ 668], 5.00th=[ 783], 10.00th=[ 832], 00:19:10.939 | 20.00th=[ 889], 30.00th=[ 930], 40.00th=[ 963], 00:19:10.939 | 50.00th=[ 1004], 60.00th=[ 1057], 70.00th=[ 1106], 00:19:10.939 | 80.00th=[ 1188], 90.00th=[ 1303], 95.00th=[ 
1385], 00:19:10.939 | 99.00th=[ 1614], 99.50th=[ 1827], 99.90th=[ 2835], 00:19:10.939 | 99.95th=[ 3261], 99.99th=[2004878] 00:19:10.939 bw ( KiB/s): min=25552, max=63416, per=100.00%, avg=57394.00, stdev=10186.90, samples=20 00:19:10.939 iops : min= 6388, max=15854, avg=14348.50, stdev=2546.72, samples=20 00:19:10.940 lat (usec) : 500=0.01%, 750=3.05%, 1000=41.37% 00:19:10.940 lat (msec) : 2=55.31%, 4=0.26%, 10=0.01%, >=2000=0.01% 00:19:10.940 cpu : usr=7.35%, sys=12.43%, ctx=23270, majf=0, minf=1 00:19:10.940 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 00:19:10.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:10.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:10.940 issued rwts: total=150633,150820,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:10.940 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:10.940 00:19:10.940 Run status group 0 (all jobs): 00:19:10.940 READ: bw=49.0MiB/s (51.4MB/s), 49.0MiB/s-49.0MiB/s (51.4MB/s-51.4MB/s), io=588MiB (617MB), run=12001-12001msec 00:19:10.940 WRITE: bw=49.1MiB/s (51.5MB/s), 49.1MiB/s-49.1MiB/s (51.5MB/s-51.5MB/s), io=589MiB (618MB), run=12001-12001msec 00:19:10.940 00:19:10.940 Disk stats (read/write): 00:19:10.940 sda: ios=148998/149118, merge=0/0, ticks=176454/179014, in_queue=355468, util=99.32% 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@87 -- # trap - SIGINT SIGTERM EXIT 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@89 -- # iscsicleanup 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:10.940 Cleaning up iSCSI connection 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:10.940 Logging out of session [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] 
00:19:10.940 Logout of [sid: 21, target: iqn.2016-06.io.spdk:target1, portal: 127.0.0.2,3260] successful. 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@983 -- # rm -rf 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@91 -- # rpc_cmd -s /var/tmp/spdk1.sock spdk_kill_instance SIGTERM 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.940 02:14:18 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:11.197 02:14:19 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.197 02:14:19 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@92 -- # wait 77335 00:19:12.132 02:14:20 iscsi_tgt.iscsi_tgt_ip_migration -- ip_migration/ip_migration.sh@93 -- # iscsitestfini 00:19:12.132 02:14:20 iscsi_tgt.iscsi_tgt_ip_migration -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:12.132 00:19:12.132 real 0m20.143s 00:19:12.132 user 0m26.562s 00:19:12.132 sys 0m4.678s 00:19:12.132 02:14:20 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.132 02:14:20 iscsi_tgt.iscsi_tgt_ip_migration -- common/autotest_common.sh@10 -- # set +x 00:19:12.390 ************************************ 00:19:12.390 END TEST iscsi_tgt_ip_migration 00:19:12.390 ************************************ 00:19:12.390 02:14:20 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:19:12.390 02:14:20 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@40 -- # run_test iscsi_tgt_trace_record /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:19:12.390 02:14:20 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:12.390 02:14:20 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.390 
02:14:20 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:12.390 ************************************ 00:19:12.390 START TEST iscsi_tgt_trace_record 00:19:12.390 ************************************ 00:19:12.390 02:14:20 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record/trace_record.sh 00:19:12.390 * Looking for test storage... 00:19:12.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/trace_record 00:19:12.390 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:12.390 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- 
iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@11 -- # iscsitestinit 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@13 -- # TRACE_TMP_FOLDER=./tmp-trace 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@14 -- # TRACE_RECORD_OUTPUT=./tmp-trace/record.trace 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@15 -- # TRACE_RECORD_NOTICE_LOG=./tmp-trace/record.notice 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@16 -- # TRACE_TOOL_LOG=./tmp-trace/trace.log 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@22 -- # '[' -z 10.0.0.1 ']' 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@27 -- # '[' -z 10.0.0.2 ']' 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@32 -- # NUM_TRACE_ENTRIES=4096 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@33 -- # MALLOC_BDEV_SIZE=64 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@34 -- # MALLOC_BLOCK_SIZE=4096 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@36 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@37 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@39 -- # timing_enter start_iscsi_tgt 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:12.391 start iscsi_tgt with trace enabled 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@41 -- # echo 'start iscsi_tgt with trace enabled' 00:19:12.391 Process pid: 77656 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@43 -- # iscsi_pid=77656 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@42 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xf --num-trace-entries 4096 --tpoint-group all 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@44 -- # echo 'Process pid: 77656' 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@46 -- # trap 'killprocess $iscsi_pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:12.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@48 -- # waitforlisten 77656 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@829 -- # '[' -z 77656 ']' 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.391 02:14:21 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:12.653 [2024-07-23 02:14:21.207210] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:12.653 [2024-07-23 02:14:21.207758] [ DPDK EAL parameters: iscsi --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77656 ] 00:19:12.653 [2024-07-23 02:14:21.387002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.944 [2024-07-23 02:14:21.633466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask all specified. 00:19:12.944 [2024-07-23 02:14:21.633703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s iscsi -p 77656' to capture a snapshot of events at runtime. 00:19:12.944 [2024-07-23 02:14:21.633748] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.944 [2024-07-23 02:14:21.633771] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:12.944 [2024-07-23 02:14:21.633796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/iscsi_trace.pid77656 for offline analysis/debug. 00:19:12.944 [2024-07-23 02:14:21.633990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.944 [2024-07-23 02:14:21.634201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.944 [2024-07-23 02:14:21.634339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.944 [2024-07-23 02:14:21.634228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.881 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@862 -- # return 0 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@50 -- # echo 'iscsi_tgt is listening. Running tests...' 00:19:13.882 iscsi_tgt is listening. Running tests... 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@52 -- # timing_exit start_iscsi_tgt 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@54 -- # mkdir -p ./tmp-trace 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@56 -- # record_pid=77691 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace_record -s iscsi -p 77656 -f ./tmp-trace/record.trace -q 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@57 -- # echo 'Trace record pid: 77691' 00:19:13.882 Trace record pid: 77691 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@59 -- # RPCS= 
00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@60 -- # RPCS+='iscsi_create_portal_group 1 10.0.0.1:3260\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@61 -- # RPCS+='iscsi_create_initiator_group 2 ANY 10.0.0.2/32\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@63 -- # echo 'Create bdevs and target nodes' 00:19:13.882 Create bdevs and target nodes 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@64 -- # CONNECTION_NUMBER=15 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # seq 0 15 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc0\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target0 Target0_alias Malloc0:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc1\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target1 Target1_alias Malloc1:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc2\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target2 
Target2_alias Malloc2:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc3\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target3 Target3_alias Malloc3:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc4\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target4 Target4_alias Malloc4:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc5\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target5 Target5_alias Malloc5:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc6\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target6 Target6_alias Malloc6:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc7\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target7 Target7_alias Malloc7:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc8\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target8 Target8_alias Malloc8:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc9\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target9 Target9_alias Malloc9:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc10\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target10 Target10_alias Malloc10:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc11\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target11 Target11_alias Malloc11:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc12\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target12 Target12_alias Malloc12:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc13\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target13 Target13_alias Malloc13:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc14\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target14 Target14_alias Malloc14:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@65 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@66 -- # RPCS+='bdev_malloc_create 64 4096 -b Malloc15\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@67 -- # RPCS+='iscsi_create_target_node Target15 Target15_alias Malloc15:0 1:2 256 -d\n' 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- 
trace_record/trace_record.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.882 02:14:22 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@69 -- # echo -e iscsi_create_portal_group 1 '10.0.0.1:3260\niscsi_create_initiator_group' 2 ANY '10.0.0.2/32\nbdev_malloc_create' 64 4096 -b 'Malloc0\niscsi_create_target_node' Target0 Target0_alias Malloc0:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc1\niscsi_create_target_node' Target1 Target1_alias Malloc1:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc2\niscsi_create_target_node' Target2 Target2_alias Malloc2:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc3\niscsi_create_target_node' Target3 Target3_alias Malloc3:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc4\niscsi_create_target_node' Target4 Target4_alias Malloc4:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc5\niscsi_create_target_node' Target5 Target5_alias Malloc5:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc6\niscsi_create_target_node' Target6 Target6_alias Malloc6:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc7\niscsi_create_target_node' Target7 Target7_alias Malloc7:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc8\niscsi_create_target_node' Target8 Target8_alias Malloc8:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc9\niscsi_create_target_node' Target9 Target9_alias Malloc9:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc10\niscsi_create_target_node' Target10 Target10_alias Malloc10:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc11\niscsi_create_target_node' Target11 Target11_alias Malloc11:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc12\niscsi_create_target_node' Target12 Target12_alias Malloc12:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc13\niscsi_create_target_node' Target13 Target13_alias Malloc13:0 1:2 256 '-d\nbdev_malloc_create' 64 4096 -b 'Malloc14\niscsi_create_target_node' Target14 Target14_alias Malloc14:0 1:2 256 '-d\nbdev_malloc_create' 
64 4096 -b 'Malloc15\niscsi_create_target_node' Target15 Target15_alias Malloc15:0 1:2 256 '-d\n' 00:19:15.259 Malloc0 00:19:15.259 Malloc1 00:19:15.259 Malloc2 00:19:15.259 Malloc3 00:19:15.259 Malloc4 00:19:15.259 Malloc5 00:19:15.259 Malloc6 00:19:15.259 Malloc7 00:19:15.259 Malloc8 00:19:15.259 Malloc9 00:19:15.259 Malloc10 00:19:15.259 Malloc11 00:19:15.259 Malloc12 00:19:15.259 Malloc13 00:19:15.259 Malloc14 00:19:15.259 Malloc15 00:19:15.259 02:14:23 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@71 -- # sleep 1 00:19:16.195 02:14:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@73 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:19:16.195 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target0 00:19:16.195 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:19:16.196 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:19:16.196 02:14:24 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@74 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:19:16.454 [2024-07-23 02:14:25.016120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.454 [2024-07-23 02:14:25.047018] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 
00:19:16.454 [2024-07-23 02:14:25.055402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.454 [2024-07-23 02:14:25.073245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.454 [2024-07-23 02:14:25.142707] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.454 [2024-07-23 02:14:25.143794] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.454 [2024-07-23 02:14:25.162516] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.454 [2024-07-23 02:14:25.231207] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.256337] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.260246] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.304674] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.337739] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.353961] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.378043] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 [2024-07-23 02:14:25.412596] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 
10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:19:16.714 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 
00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:19:16.714 Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 
00:19:16.714 [2024-07-23 02:14:25.431348] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@75 -- # waitforiscsidevices 16 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@116 -- # local num=16 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@119 -- # n=16 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@120 -- # '[' 16 -ne 16 ']' 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@123 -- # return 0 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@77 -- # trap 'iscsicleanup; killprocess $iscsi_pid; killprocess $record_pid; delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:16.714 Running FIO 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@79 -- # echo 'Running FIO' 00:19:16.714 02:14:25 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 00:19:16.973 [global] 00:19:16.973 thread=1 00:19:16.973 invalidate=1 00:19:16.973 rw=randrw 00:19:16.973 time_based=1 00:19:16.973 runtime=1 00:19:16.973 ioengine=libaio 00:19:16.973 direct=1 00:19:16.973 bs=131072 00:19:16.973 iodepth=32 00:19:16.973 norandommap=1 00:19:16.973 numjobs=1 00:19:16.973 00:19:16.973 [job0] 00:19:16.973 filename=/dev/sda 00:19:16.973 [job1] 
00:19:16.973 filename=/dev/sdb 00:19:16.973 [job2] 00:19:16.973 filename=/dev/sdc 00:19:16.973 [job3] 00:19:16.973 filename=/dev/sdf 00:19:16.973 [job4] 00:19:16.973 filename=/dev/sdd 00:19:16.973 [job5] 00:19:16.973 filename=/dev/sde 00:19:16.973 [job6] 00:19:16.973 filename=/dev/sdg 00:19:16.973 [job7] 00:19:16.973 filename=/dev/sdh 00:19:16.973 [job8] 00:19:16.973 filename=/dev/sdi 00:19:16.973 [job9] 00:19:16.973 filename=/dev/sdj 00:19:16.973 [job10] 00:19:16.973 filename=/dev/sdk 00:19:16.973 [job11] 00:19:16.973 filename=/dev/sdl 00:19:16.973 [job12] 00:19:16.973 filename=/dev/sdm 00:19:16.973 [job13] 00:19:16.973 filename=/dev/sdn 00:19:16.973 [job14] 00:19:16.973 filename=/dev/sdo 00:19:16.973 [job15] 00:19:16.973 filename=/dev/sdp 00:19:16.973 queue_depth set to 113 (sda) 00:19:17.232 queue_depth set to 113 (sdb) 00:19:17.232 queue_depth set to 113 (sdc) 00:19:17.232 queue_depth set to 113 (sdf) 00:19:17.232 queue_depth set to 113 (sdd) 00:19:17.232 queue_depth set to 113 (sde) 00:19:17.232 queue_depth set to 113 (sdg) 00:19:17.232 queue_depth set to 113 (sdh) 00:19:17.232 queue_depth set to 113 (sdi) 00:19:17.232 queue_depth set to 113 (sdj) 00:19:17.232 queue_depth set to 113 (sdk) 00:19:17.232 queue_depth set to 113 (sdl) 00:19:17.232 queue_depth set to 113 (sdm) 00:19:17.491 queue_depth set to 113 (sdn) 00:19:17.491 queue_depth set to 113 (sdo) 00:19:17.491 queue_depth set to 113 (sdp) 00:19:17.491 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:19:17.491 fio-3.35 00:19:17.491 Starting 16 threads 00:19:17.491 [2024-07-23 02:14:26.181634] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.185461] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.189106] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.193757] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 
[2024-07-23 02:14:26.196833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.199370] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.201892] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.205393] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.208150] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.210697] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.213397] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.217145] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.219847] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.222456] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.225034] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:17.491 [2024-07-23 02:14:26.228402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.587044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.593848] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.597241] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.600641] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.603179] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.606012] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.608355] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.611757] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.614849] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.617732] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.621253] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.623706] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.626210] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 [2024-07-23 02:14:27.631151] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.870 00:19:18.870 job0: (groupid=0, jobs=1): err= 0: pid=78073: Tue Jul 23 02:14:27 2024 00:19:18.870 read: IOPS=344, BW=43.0MiB/s (45.1MB/s)(45.1MiB/1049msec) 00:19:18.870 slat (usec): min=5, max=317, avg=21.68, stdev=27.94 00:19:18.870 clat (usec): min=2371, max=58452, avg=11379.27, stdev=4875.69 00:19:18.870 lat (usec): min=2389, max=58462, avg=11400.95, stdev=4874.97 00:19:18.870 clat percentiles (usec): 00:19:18.870 | 1.00th=[ 4621], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10290], 00:19:18.870 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:19:18.870 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12780], 00:19:18.870 | 99.00th=[51119], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:19:18.870 | 99.99th=[58459] 00:19:18.870 bw ( KiB/s): min=41216, max=50276, 
per=6.22%, avg=45746.00, stdev=6406.39, samples=2 00:19:18.870 iops : min= 322, max= 392, avg=357.00, stdev=49.50, samples=2 00:19:18.870 write: IOPS=373, BW=46.7MiB/s (49.0MB/s)(49.0MiB/1049msec); 0 zone resets 00:19:18.870 slat (usec): min=7, max=306, avg=27.46, stdev=31.51 00:19:18.870 clat (msec): min=8, max=122, avg=74.94, stdev=11.23 00:19:18.870 lat (msec): min=8, max=122, avg=74.96, stdev=11.24 00:19:18.870 clat percentiles (msec): 00:19:18.870 | 1.00th=[ 21], 5.00th=[ 59], 10.00th=[ 69], 20.00th=[ 72], 00:19:18.870 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 78], 00:19:18.870 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 82], 95.00th=[ 85], 00:19:18.870 | 99.00th=[ 110], 99.50th=[ 115], 99.90th=[ 124], 99.95th=[ 124], 00:19:18.870 | 99.99th=[ 124] 00:19:18.870 bw ( KiB/s): min=45659, max=47872, per=6.24%, avg=46765.50, stdev=1564.83, samples=2 00:19:18.870 iops : min= 356, max= 374, avg=365.00, stdev=12.73, samples=2 00:19:18.870 lat (msec) : 4=0.27%, 10=4.52%, 20=43.03%, 50=1.33%, 100=49.93% 00:19:18.870 lat (msec) : 250=0.93% 00:19:18.870 cpu : usr=0.38%, sys=1.34%, ctx=711, majf=0, minf=1 00:19:18.871 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.871 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.871 issued rwts: total=361,392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.871 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.871 job1: (groupid=0, jobs=1): err= 0: pid=78074: Tue Jul 23 02:14:27 2024 00:19:18.871 read: IOPS=313, BW=39.2MiB/s (41.2MB/s)(41.2MiB/1051msec) 00:19:18.871 slat (usec): min=6, max=444, avg=20.37, stdev=36.81 00:19:18.871 clat (usec): min=3295, max=62283, avg=12452.28, stdev=5870.37 00:19:18.871 lat (usec): min=3314, max=62308, avg=12472.65, stdev=5870.27 00:19:18.871 clat percentiles (usec): 00:19:18.871 | 1.00th=[ 9634], 5.00th=[10290], 
10.00th=[10683], 20.00th=[10945], 00:19:18.871 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:19:18.871 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[14353], 00:19:18.871 | 99.00th=[56361], 99.50th=[59507], 99.90th=[62129], 99.95th=[62129], 00:19:18.871 | 99.99th=[62129] 00:19:18.871 bw ( KiB/s): min=39168, max=44032, per=5.66%, avg=41600.00, stdev=3439.37, samples=2 00:19:18.871 iops : min= 306, max= 344, avg=325.00, stdev=26.87, samples=2 00:19:18.871 write: IOPS=351, BW=43.9MiB/s (46.0MB/s)(46.1MiB/1051msec); 0 zone resets 00:19:18.871 slat (usec): min=7, max=545, avg=32.84, stdev=49.58 00:19:18.871 clat (msec): min=16, max=128, avg=79.78, stdev=11.35 00:19:18.871 lat (msec): min=16, max=128, avg=79.81, stdev=11.35 00:19:18.871 clat percentiles (msec): 00:19:18.871 | 1.00th=[ 30], 5.00th=[ 67], 10.00th=[ 72], 20.00th=[ 77], 00:19:18.871 | 30.00th=[ 79], 40.00th=[ 80], 50.00th=[ 82], 60.00th=[ 83], 00:19:18.871 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 88], 95.00th=[ 89], 00:19:18.871 | 99.00th=[ 116], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 129], 00:19:18.871 | 99.99th=[ 129] 00:19:18.871 bw ( KiB/s): min=41984, max=45824, per=5.86%, avg=43904.00, stdev=2715.29, samples=2 00:19:18.871 iops : min= 328, max= 358, avg=343.00, stdev=21.21, samples=2 00:19:18.871 lat (msec) : 4=0.14%, 10=1.00%, 20=45.49%, 50=1.14%, 100=50.64% 00:19:18.871 lat (msec) : 250=1.57% 00:19:18.871 cpu : usr=0.67%, sys=0.95%, ctx=659, majf=0, minf=1 00:19:18.871 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=95.6%, >=64=0.0% 00:19:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.871 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.871 issued rwts: total=330,369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.871 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.871 job2: (groupid=0, jobs=1): err= 0: pid=78075: Tue Jul 23 02:14:27 2024 00:19:18.871 read: 
IOPS=383, BW=47.9MiB/s (50.2MB/s)(50.5MiB/1054msec) 00:19:18.871 slat (usec): min=7, max=283, avg=23.12, stdev=17.48 00:19:18.871 clat (usec): min=3653, max=62668, avg=11349.89, stdev=4168.00 00:19:18.871 lat (usec): min=3663, max=62690, avg=11373.01, stdev=4167.47 00:19:18.871 clat percentiles (usec): 00:19:18.871 | 1.00th=[ 5538], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10421], 00:19:18.871 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:19:18.871 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12387], 00:19:18.871 | 99.00th=[16581], 99.50th=[54264], 99.90th=[62653], 99.95th=[62653], 00:19:18.871 | 99.99th=[62653] 00:19:18.871 bw ( KiB/s): min=50432, max=52224, per=6.98%, avg=51328.00, stdev=1267.14, samples=2 00:19:18.871 iops : min= 394, max= 408, avg=401.00, stdev= 9.90, samples=2 00:19:18.871 write: IOPS=371, BW=46.5MiB/s (48.7MB/s)(49.0MiB/1054msec); 0 zone resets 00:19:18.871 slat (usec): min=12, max=241, avg=30.40, stdev=18.92 00:19:18.871 clat (msec): min=16, max=123, avg=74.05, stdev=11.43 00:19:18.871 lat (msec): min=16, max=123, avg=74.08, stdev=11.44 00:19:18.871 clat percentiles (msec): 00:19:18.871 | 1.00th=[ 24], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 70], 00:19:18.871 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 77], 00:19:18.871 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 86], 00:19:18.871 | 99.00th=[ 113], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 124], 00:19:18.871 | 99.99th=[ 124] 00:19:18.871 bw ( KiB/s): min=45824, max=47360, per=6.22%, avg=46592.00, stdev=1086.12, samples=2 00:19:18.871 iops : min= 358, max= 370, avg=364.00, stdev= 8.49, samples=2 00:19:18.871 lat (msec) : 4=0.25%, 10=3.77%, 20=46.61%, 50=1.26%, 100=46.73% 00:19:18.871 lat (msec) : 250=1.38% 00:19:18.871 cpu : usr=0.85%, sys=1.80%, ctx=655, majf=0, minf=1 00:19:18.871 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.1%, >=64=0.0% 00:19:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:19:18.871 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.871 issued rwts: total=404,392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.871 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.871 job3: (groupid=0, jobs=1): err= 0: pid=78076: Tue Jul 23 02:14:27 2024 00:19:18.871 read: IOPS=361, BW=45.2MiB/s (47.4MB/s)(47.4MiB/1049msec) 00:19:18.871 slat (usec): min=7, max=281, avg=21.08, stdev=28.22 00:19:18.871 clat (usec): min=2242, max=56319, avg=11473.99, stdev=5357.41 00:19:18.871 lat (usec): min=2253, max=56341, avg=11495.06, stdev=5356.55 00:19:18.871 clat percentiles (usec): 00:19:18.871 | 1.00th=[ 6259], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10159], 00:19:18.871 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:19:18.871 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[13829], 00:19:18.871 | 99.00th=[53740], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:19:18.871 | 99.99th=[56361] 00:19:18.871 bw ( KiB/s): min=44032, max=51815, per=6.52%, avg=47923.50, stdev=5503.41, samples=2 00:19:18.871 iops : min= 344, max= 404, avg=374.00, stdev=42.43, samples=2 00:19:18.871 write: IOPS=375, BW=46.9MiB/s (49.2MB/s)(49.2MiB/1049msec); 0 zone resets 00:19:18.871 slat (usec): min=9, max=400, avg=25.24, stdev=33.17 00:19:18.871 clat (msec): min=14, max=114, avg=73.91, stdev= 9.78 00:19:18.871 lat (msec): min=14, max=114, avg=73.94, stdev= 9.78 00:19:18.871 clat percentiles (msec): 00:19:18.871 | 1.00th=[ 26], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 71], 00:19:18.871 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 77], 00:19:18.871 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 82], 95.00th=[ 84], 00:19:18.871 | 99.00th=[ 105], 99.50th=[ 107], 99.90th=[ 115], 99.95th=[ 115], 00:19:18.871 | 99.99th=[ 115] 00:19:18.871 bw ( KiB/s): min=45402, max=48640, per=6.27%, avg=47021.00, stdev=2289.61, samples=2 00:19:18.871 iops : min= 354, max= 380, avg=367.00, 
stdev=18.38, samples=2 00:19:18.871 lat (msec) : 4=0.26%, 10=6.60%, 20=41.27%, 50=1.81%, 100=49.42% 00:19:18.871 lat (msec) : 250=0.65% 00:19:18.871 cpu : usr=0.38%, sys=1.43%, ctx=716, majf=0, minf=1 00:19:18.871 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=96.0%, >=64=0.0% 00:19:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.871 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.871 issued rwts: total=379,394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.871 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.871 job4: (groupid=0, jobs=1): err= 0: pid=78077: Tue Jul 23 02:14:27 2024 00:19:18.871 read: IOPS=362, BW=45.3MiB/s (47.5MB/s)(47.6MiB/1052msec) 00:19:18.871 slat (usec): min=6, max=198, avg=18.03, stdev=17.19 00:19:18.871 clat (usec): min=1373, max=60385, avg=11538.88, stdev=5524.39 00:19:18.871 lat (usec): min=1382, max=60404, avg=11556.91, stdev=5524.05 00:19:18.871 clat percentiles (usec): 00:19:18.871 | 1.00th=[ 1713], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10421], 00:19:18.871 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11207], 00:19:18.871 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12256], 95.00th=[14222], 00:19:18.871 | 99.00th=[52691], 99.50th=[60556], 99.90th=[60556], 99.95th=[60556], 00:19:18.871 | 99.99th=[60556] 00:19:18.871 bw ( KiB/s): min=43776, max=52585, per=6.56%, avg=48180.50, stdev=6228.90, samples=2 00:19:18.871 iops : min= 342, max= 410, avg=376.00, stdev=48.08, samples=2 00:19:18.871 write: IOPS=371, BW=46.5MiB/s (48.7MB/s)(48.9MiB/1052msec); 0 zone resets 00:19:18.871 slat (usec): min=7, max=332, avg=25.87, stdev=31.78 00:19:18.871 clat (msec): min=6, max=122, avg=74.62, stdev=12.02 00:19:18.871 lat (msec): min=6, max=122, avg=74.65, stdev=12.02 00:19:18.871 clat percentiles (msec): 00:19:18.871 | 1.00th=[ 18], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 71], 00:19:18.871 | 30.00th=[ 73], 40.00th=[ 75], 50.00th=[ 75], 
60.00th=[ 78], 00:19:18.871 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 83], 95.00th=[ 86], 00:19:18.871 | 99.00th=[ 117], 99.50th=[ 117], 99.90th=[ 123], 99.95th=[ 123], 00:19:18.871 | 99.99th=[ 123] 00:19:18.871 bw ( KiB/s): min=45659, max=47616, per=6.22%, avg=46637.50, stdev=1383.81, samples=2 00:19:18.871 iops : min= 356, max= 372, avg=364.00, stdev=11.31, samples=2 00:19:18.871 lat (msec) : 2=0.65%, 10=3.89%, 20=44.69%, 50=1.17%, 100=48.45% 00:19:18.871 lat (msec) : 250=1.17% 00:19:18.871 cpu : usr=0.19%, sys=1.33%, ctx=710, majf=0, minf=1 00:19:18.871 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=96.0%, >=64=0.0% 00:19:18.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.871 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.871 issued rwts: total=381,391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.871 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.871 job5: (groupid=0, jobs=1): err= 0: pid=78078: Tue Jul 23 02:14:27 2024 00:19:18.871 read: IOPS=360, BW=45.1MiB/s (47.3MB/s)(47.6MiB/1056msec) 00:19:18.871 slat (usec): min=5, max=490, avg=20.53, stdev=34.67 00:19:18.871 clat (usec): min=2804, max=62343, avg=11963.76, stdev=4214.20 00:19:18.871 lat (usec): min=2824, max=62363, avg=11984.28, stdev=4212.81 00:19:18.871 clat percentiles (usec): 00:19:18.871 | 1.00th=[ 3654], 5.00th=[10290], 10.00th=[10552], 20.00th=[10814], 00:19:18.871 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:19:18.871 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:19:18.871 | 99.00th=[23987], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:19:18.871 | 99.99th=[62129] 00:19:18.871 bw ( KiB/s): min=43008, max=54016, per=6.60%, avg=48512.00, stdev=7783.83, samples=2 00:19:18.871 iops : min= 336, max= 422, avg=379.00, stdev=60.81, samples=2 00:19:18.872 write: IOPS=353, BW=44.2MiB/s (46.3MB/s)(46.6MiB/1056msec); 0 zone resets 00:19:18.872 slat 
(usec): min=8, max=254, avg=28.45, stdev=28.92 00:19:18.872 clat (msec): min=12, max=126, avg=78.04, stdev=12.25 00:19:18.872 lat (msec): min=12, max=126, avg=78.07, stdev=12.25 00:19:18.872 clat percentiles (msec): 00:19:18.872 | 1.00th=[ 24], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 74], 00:19:18.872 | 30.00th=[ 75], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:19:18.872 | 70.00th=[ 83], 80.00th=[ 84], 90.00th=[ 86], 95.00th=[ 89], 00:19:18.872 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 127], 99.95th=[ 127], 00:19:18.872 | 99.99th=[ 127] 00:19:18.872 bw ( KiB/s): min=42240, max=45568, per=5.86%, avg=43904.00, stdev=2353.25, samples=2 00:19:18.872 iops : min= 330, max= 356, avg=343.00, stdev=18.38, samples=2 00:19:18.872 lat (msec) : 4=0.53%, 10=1.19%, 20=48.01%, 50=1.86%, 100=46.68% 00:19:18.872 lat (msec) : 250=1.72% 00:19:18.872 cpu : usr=0.57%, sys=1.23%, ctx=717, majf=0, minf=1 00:19:18.872 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.872 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.872 issued rwts: total=381,373,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.872 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.872 job6: (groupid=0, jobs=1): err= 0: pid=78097: Tue Jul 23 02:14:27 2024 00:19:18.872 read: IOPS=370, BW=46.4MiB/s (48.6MB/s)(48.6MiB/1049msec) 00:19:18.872 slat (usec): min=7, max=495, avg=25.03, stdev=42.19 00:19:18.872 clat (usec): min=2797, max=57154, avg=11656.05, stdev=5520.33 00:19:18.872 lat (usec): min=2810, max=57171, avg=11681.08, stdev=5518.65 00:19:18.872 clat percentiles (usec): 00:19:18.872 | 1.00th=[ 8979], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10421], 00:19:18.872 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:19:18.872 | 70.00th=[11338], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:19:18.872 | 99.00th=[54264], 
99.50th=[56886], 99.90th=[57410], 99.95th=[57410], 00:19:18.872 | 99.99th=[57410] 00:19:18.872 bw ( KiB/s): min=46592, max=51559, per=6.68%, avg=49075.50, stdev=3512.20, samples=2 00:19:18.872 iops : min= 364, max= 402, avg=383.00, stdev=26.87, samples=2 00:19:18.872 write: IOPS=371, BW=46.5MiB/s (48.7MB/s)(48.8MiB/1049msec); 0 zone resets 00:19:18.872 slat (usec): min=11, max=674, avg=36.03, stdev=54.30 00:19:18.872 clat (msec): min=13, max=115, avg=74.21, stdev=11.30 00:19:18.872 lat (msec): min=13, max=115, avg=74.24, stdev=11.30 00:19:18.872 clat percentiles (msec): 00:19:18.872 | 1.00th=[ 21], 5.00th=[ 57], 10.00th=[ 65], 20.00th=[ 70], 00:19:18.872 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 77], 60.00th=[ 78], 00:19:18.872 | 70.00th=[ 79], 80.00th=[ 80], 90.00th=[ 84], 95.00th=[ 86], 00:19:18.872 | 99.00th=[ 108], 99.50th=[ 113], 99.90th=[ 116], 99.95th=[ 116], 00:19:18.872 | 99.99th=[ 116] 00:19:18.872 bw ( KiB/s): min=45915, max=47360, per=6.22%, avg=46637.50, stdev=1021.77, samples=2 00:19:18.872 iops : min= 358, max= 370, avg=364.00, stdev= 8.49, samples=2 00:19:18.872 lat (msec) : 4=0.26%, 10=4.11%, 20=45.19%, 50=1.54%, 100=48.14% 00:19:18.872 lat (msec) : 250=0.77% 00:19:18.872 cpu : usr=0.67%, sys=1.43%, ctx=699, majf=0, minf=1 00:19:18.872 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=96.0%, >=64=0.0% 00:19:18.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.872 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.872 issued rwts: total=389,390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.872 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.872 job7: (groupid=0, jobs=1): err= 0: pid=78113: Tue Jul 23 02:14:27 2024 00:19:18.872 read: IOPS=351, BW=43.9MiB/s (46.0MB/s)(46.0MiB/1048msec) 00:19:18.872 slat (usec): min=7, max=353, avg=22.52, stdev=28.71 00:19:18.872 clat (usec): min=3682, max=56802, avg=11542.41, stdev=5138.20 00:19:18.872 lat (usec): min=3690, 
max=56851, avg=11564.93, stdev=5137.92 00:19:18.872 clat percentiles (usec): 00:19:18.872 | 1.00th=[ 6980], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10159], 00:19:18.872 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:18.872 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11994], 95.00th=[13960], 00:19:18.872 | 99.00th=[49546], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:19:18.872 | 99.99th=[56886] 00:19:18.872 bw ( KiB/s): min=42752, max=50276, per=6.33%, avg=46514.00, stdev=5320.27, samples=2 00:19:18.872 iops : min= 334, max= 392, avg=363.00, stdev=41.01, samples=2 00:19:18.872 write: IOPS=375, BW=47.0MiB/s (49.3MB/s)(49.2MiB/1048msec); 0 zone resets 00:19:18.872 slat (usec): min=9, max=408, avg=25.51, stdev=25.34 00:19:18.872 clat (msec): min=15, max=110, avg=74.10, stdev= 9.43 00:19:18.872 lat (msec): min=15, max=110, avg=74.13, stdev= 9.43 00:19:18.872 clat percentiles (msec): 00:19:18.872 | 1.00th=[ 28], 5.00th=[ 64], 10.00th=[ 68], 20.00th=[ 71], 00:19:18.872 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 77], 00:19:18.872 | 70.00th=[ 78], 80.00th=[ 79], 90.00th=[ 81], 95.00th=[ 83], 00:19:18.872 | 99.00th=[ 106], 99.50th=[ 110], 99.90th=[ 111], 99.95th=[ 111], 00:19:18.872 | 99.99th=[ 111] 00:19:18.872 bw ( KiB/s): min=45402, max=48896, per=6.29%, avg=47149.00, stdev=2470.63, samples=2 00:19:18.872 iops : min= 354, max= 382, avg=368.00, stdev=19.80, samples=2 00:19:18.872 lat (msec) : 4=0.26%, 10=6.56%, 20=40.81%, 50=1.57%, 100=50.13% 00:19:18.872 lat (msec) : 250=0.66% 00:19:18.872 cpu : usr=0.57%, sys=1.34%, ctx=691, majf=0, minf=1 00:19:18.872 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.872 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.872 issued rwts: total=368,394,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.872 latency : target=0, window=0, 
percentile=100.00%, depth=32 00:19:18.872 job8: (groupid=0, jobs=1): err= 0: pid=78158: Tue Jul 23 02:14:27 2024 00:19:18.872 read: IOPS=333, BW=41.7MiB/s (43.7MB/s)(44.1MiB/1059msec) 00:19:18.872 slat (usec): min=7, max=782, avg=25.48, stdev=50.65 00:19:18.872 clat (usec): min=478, max=65545, avg=11000.05, stdev=5329.75 00:19:18.872 lat (usec): min=489, max=65569, avg=11025.53, stdev=5330.73 00:19:18.872 clat percentiles (usec): 00:19:18.872 | 1.00th=[ 1319], 5.00th=[ 5932], 10.00th=[ 9372], 20.00th=[10290], 00:19:18.872 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:19:18.872 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[14222], 00:19:18.872 | 99.00th=[16909], 99.50th=[63177], 99.90th=[65799], 99.95th=[65799], 00:19:18.872 | 99.99th=[65799] 00:19:18.872 bw ( KiB/s): min=41216, max=48224, per=6.09%, avg=44720.00, stdev=4955.40, samples=2 00:19:18.872 iops : min= 322, max= 376, avg=349.00, stdev=38.18, samples=2 00:19:18.872 write: IOPS=377, BW=47.2MiB/s (49.5MB/s)(50.0MiB/1059msec); 0 zone resets 00:19:18.872 slat (usec): min=7, max=632, avg=26.34, stdev=37.67 00:19:18.872 clat (msec): min=2, max=137, avg=74.73, stdev=16.76 00:19:18.872 lat (msec): min=2, max=137, avg=74.75, stdev=16.77 00:19:18.872 clat percentiles (msec): 00:19:18.872 | 1.00th=[ 4], 5.00th=[ 43], 10.00th=[ 68], 20.00th=[ 72], 00:19:18.872 | 30.00th=[ 74], 40.00th=[ 75], 50.00th=[ 77], 60.00th=[ 79], 00:19:18.872 | 70.00th=[ 81], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 87], 00:19:18.872 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 138], 99.95th=[ 138], 00:19:18.872 | 99.99th=[ 138] 00:19:18.872 bw ( KiB/s): min=47454, max=47616, per=6.34%, avg=47535.00, stdev=114.55, samples=2 00:19:18.872 iops : min= 370, max= 372, avg=371.00, stdev= 1.41, samples=2 00:19:18.872 lat (usec) : 500=0.13%, 750=0.13% 00:19:18.872 lat (msec) : 2=0.66%, 4=1.06%, 10=6.64%, 20=39.58%, 50=1.20% 00:19:18.872 lat (msec) : 100=48.87%, 250=1.73% 00:19:18.872 cpu : usr=0.47%, sys=1.42%, 
ctx=681, majf=0, minf=1 00:19:18.872 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.872 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.872 issued rwts: total=353,400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.872 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.872 job9: (groupid=0, jobs=1): err= 0: pid=78181: Tue Jul 23 02:14:27 2024 00:19:18.872 read: IOPS=375, BW=47.0MiB/s (49.3MB/s)(49.6MiB/1056msec) 00:19:18.872 slat (usec): min=6, max=733, avg=20.60, stdev=38.54 00:19:18.872 clat (usec): min=1051, max=64462, avg=12336.98, stdev=5578.40 00:19:18.872 lat (usec): min=1061, max=64485, avg=12357.59, stdev=5579.14 00:19:18.872 clat percentiles (usec): 00:19:18.872 | 1.00th=[ 1811], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:19:18.872 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:19:18.872 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13435], 95.00th=[16581], 00:19:18.872 | 99.00th=[61604], 99.50th=[64226], 99.90th=[64226], 99.95th=[64226], 00:19:18.872 | 99.99th=[64226] 00:19:18.872 bw ( KiB/s): min=43264, max=57344, per=6.84%, avg=50304.00, stdev=9956.06, samples=2 00:19:18.872 iops : min= 338, max= 448, avg=393.00, stdev=77.78, samples=2 00:19:18.872 write: IOPS=352, BW=44.0MiB/s (46.2MB/s)(46.5MiB/1056msec); 0 zone resets 00:19:18.872 slat (usec): min=9, max=334, avg=27.15, stdev=27.12 00:19:18.872 clat (msec): min=6, max=133, avg=77.41, stdev=12.89 00:19:18.872 lat (msec): min=6, max=133, avg=77.44, stdev=12.90 00:19:18.872 clat percentiles (msec): 00:19:18.872 | 1.00th=[ 22], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 73], 00:19:18.872 | 30.00th=[ 74], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 81], 00:19:18.872 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 87], 95.00th=[ 91], 00:19:18.872 | 99.00th=[ 124], 99.50th=[ 127], 99.90th=[ 134], 99.95th=[ 134], 
00:19:18.872 | 99.99th=[ 134] 00:19:18.872 bw ( KiB/s): min=42496, max=45568, per=5.88%, avg=44032.00, stdev=2172.23, samples=2 00:19:18.872 iops : min= 332, max= 356, avg=344.00, stdev=16.97, samples=2 00:19:18.872 lat (msec) : 2=0.52%, 10=1.95%, 20=48.76%, 50=1.30%, 100=46.16% 00:19:18.872 lat (msec) : 250=1.30% 00:19:18.872 cpu : usr=0.47%, sys=1.52%, ctx=704, majf=0, minf=1 00:19:18.872 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=96.0%, >=64=0.0% 00:19:18.872 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.873 issued rwts: total=397,372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.873 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.873 job10: (groupid=0, jobs=1): err= 0: pid=78182: Tue Jul 23 02:14:27 2024 00:19:18.873 read: IOPS=393, BW=49.2MiB/s (51.6MB/s)(51.8MiB/1052msec) 00:19:18.873 slat (usec): min=5, max=462, avg=19.43, stdev=27.09 00:19:18.873 clat (usec): min=5397, max=54148, avg=11380.83, stdev=3788.11 00:19:18.873 lat (usec): min=5404, max=54163, avg=11400.26, stdev=3788.09 00:19:18.873 clat percentiles (usec): 00:19:18.873 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10421], 00:19:18.873 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:19:18.873 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12125], 95.00th=[12911], 00:19:18.873 | 99.00th=[18744], 99.50th=[52167], 99.90th=[54264], 99.95th=[54264], 00:19:18.873 | 99.99th=[54264] 00:19:18.873 bw ( KiB/s): min=49152, max=56064, per=7.16%, avg=52608.00, stdev=4887.52, samples=2 00:19:18.873 iops : min= 384, max= 438, avg=411.00, stdev=38.18, samples=2 00:19:18.873 write: IOPS=373, BW=46.7MiB/s (49.0MB/s)(49.1MiB/1052msec); 0 zone resets 00:19:18.873 slat (usec): min=6, max=521, avg=28.84, stdev=35.93 00:19:18.873 clat (msec): min=14, max=118, avg=73.40, stdev=11.30 00:19:18.873 lat (msec): min=14, max=118, avg=73.43, 
stdev=11.31 00:19:18.873 clat percentiles (msec): 00:19:18.873 | 1.00th=[ 24], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 69], 00:19:18.873 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 74], 60.00th=[ 75], 00:19:18.873 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 82], 95.00th=[ 85], 00:19:18.873 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 00:19:18.873 | 99.99th=[ 118] 00:19:18.873 bw ( KiB/s): min=45824, max=47360, per=6.22%, avg=46592.00, stdev=1086.12, samples=2 00:19:18.873 iops : min= 358, max= 370, avg=364.00, stdev= 8.49, samples=2 00:19:18.873 lat (msec) : 10=4.58%, 20=46.72%, 50=1.24%, 100=46.34%, 250=1.12% 00:19:18.873 cpu : usr=0.57%, sys=1.52%, ctx=753, majf=0, minf=1 00:19:18.873 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=96.2%, >=64=0.0% 00:19:18.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.873 issued rwts: total=414,393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.873 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.873 job11: (groupid=0, jobs=1): err= 0: pid=78184: Tue Jul 23 02:14:27 2024 00:19:18.873 read: IOPS=349, BW=43.7MiB/s (45.8MB/s)(45.9MiB/1050msec) 00:19:18.873 slat (usec): min=6, max=474, avg=25.72, stdev=29.34 00:19:18.873 clat (usec): min=2087, max=56451, avg=11832.74, stdev=4623.81 00:19:18.873 lat (usec): min=2096, max=56464, avg=11858.46, stdev=4621.65 00:19:18.873 clat percentiles (usec): 00:19:18.873 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10290], 00:19:18.873 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:19:18.873 | 70.00th=[11338], 80.00th=[11731], 90.00th=[13829], 95.00th=[18220], 00:19:18.873 | 99.00th=[22676], 99.50th=[54264], 99.90th=[56361], 99.95th=[56361], 00:19:18.873 | 99.99th=[56361] 00:19:18.873 bw ( KiB/s): min=40448, max=52736, per=6.34%, avg=46592.00, stdev=8688.93, samples=2 00:19:18.873 iops : min= 316, 
max= 412, avg=364.00, stdev=67.88, samples=2 00:19:18.873 write: IOPS=377, BW=47.1MiB/s (49.4MB/s)(49.5MiB/1050msec); 0 zone resets 00:19:18.873 slat (usec): min=7, max=469, avg=30.95, stdev=34.13 00:19:18.873 clat (msec): min=10, max=124, avg=73.63, stdev=11.22 00:19:18.873 lat (msec): min=10, max=124, avg=73.67, stdev=11.22 00:19:18.873 clat percentiles (msec): 00:19:18.873 | 1.00th=[ 24], 5.00th=[ 58], 10.00th=[ 66], 20.00th=[ 70], 00:19:18.873 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 77], 00:19:18.873 | 70.00th=[ 78], 80.00th=[ 79], 90.00th=[ 82], 95.00th=[ 86], 00:19:18.873 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 125], 99.95th=[ 125], 00:19:18.873 | 99.99th=[ 125] 00:19:18.873 bw ( KiB/s): min=45312, max=48896, per=6.29%, avg=47104.00, stdev=2534.27, samples=2 00:19:18.873 iops : min= 354, max= 382, avg=368.00, stdev=19.80, samples=2 00:19:18.873 lat (msec) : 4=0.39%, 10=5.11%, 20=41.28%, 50=2.23%, 100=49.80% 00:19:18.873 lat (msec) : 250=1.18% 00:19:18.873 cpu : usr=0.86%, sys=1.72%, ctx=591, majf=0, minf=1 00:19:18.873 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.873 issued rwts: total=367,396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.873 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.873 job12: (groupid=0, jobs=1): err= 0: pid=78188: Tue Jul 23 02:14:27 2024 00:19:18.873 read: IOPS=418, BW=52.3MiB/s (54.8MB/s)(54.6MiB/1045msec) 00:19:18.873 slat (usec): min=6, max=356, avg=23.16, stdev=39.52 00:19:18.873 clat (usec): min=2684, max=52259, avg=11372.50, stdev=4300.16 00:19:18.873 lat (usec): min=2693, max=52284, avg=11395.66, stdev=4302.79 00:19:18.873 clat percentiles (usec): 00:19:18.873 | 1.00th=[ 3720], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10290], 00:19:18.873 | 30.00th=[10552], 40.00th=[10683], 
50.00th=[10945], 60.00th=[11076], 00:19:18.873 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[14353], 00:19:18.873 | 99.00th=[45351], 99.50th=[45351], 99.90th=[52167], 99.95th=[52167], 00:19:18.873 | 99.99th=[52167] 00:19:18.873 bw ( KiB/s): min=54528, max=56064, per=7.52%, avg=55296.00, stdev=1086.12, samples=2 00:19:18.873 iops : min= 426, max= 438, avg=432.00, stdev= 8.49, samples=2 00:19:18.873 write: IOPS=374, BW=46.8MiB/s (49.0MB/s)(48.9MiB/1045msec); 0 zone resets 00:19:18.873 slat (usec): min=6, max=339, avg=21.82, stdev=28.46 00:19:18.873 clat (msec): min=10, max=115, avg=72.55, stdev=10.38 00:19:18.873 lat (msec): min=10, max=115, avg=72.58, stdev=10.38 00:19:18.873 clat percentiles (msec): 00:19:18.873 | 1.00th=[ 22], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 70], 00:19:18.873 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 75], 00:19:18.873 | 70.00th=[ 77], 80.00th=[ 78], 90.00th=[ 80], 95.00th=[ 82], 00:19:18.873 | 99.00th=[ 102], 99.50th=[ 112], 99.90th=[ 115], 99.95th=[ 115], 00:19:18.873 | 99.99th=[ 115] 00:19:18.873 bw ( KiB/s): min=45568, max=47872, per=6.23%, avg=46720.00, stdev=1629.17, samples=2 00:19:18.873 iops : min= 356, max= 374, avg=365.00, stdev=12.73, samples=2 00:19:18.873 lat (msec) : 4=0.60%, 10=5.31%, 20=46.62%, 50=1.57%, 100=45.17% 00:19:18.873 lat (msec) : 250=0.72% 00:19:18.873 cpu : usr=0.57%, sys=1.25%, ctx=799, majf=0, minf=1 00:19:18.873 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=96.3%, >=64=0.0% 00:19:18.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.873 issued rwts: total=437,391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.873 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.873 job13: (groupid=0, jobs=1): err= 0: pid=78189: Tue Jul 23 02:14:27 2024 00:19:18.873 read: IOPS=370, BW=46.3MiB/s (48.5MB/s)(48.8MiB/1054msec) 00:19:18.873 slat (usec): min=5, 
max=316, avg=18.60, stdev=19.49 00:19:18.873 clat (usec): min=5644, max=63213, avg=12379.37, stdev=5099.33 00:19:18.873 lat (usec): min=5664, max=63227, avg=12397.97, stdev=5098.46 00:19:18.873 clat percentiles (usec): 00:19:18.873 | 1.00th=[ 9110], 5.00th=[10421], 10.00th=[10683], 20.00th=[10945], 00:19:18.873 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:19:18.873 | 70.00th=[12125], 80.00th=[12518], 90.00th=[12911], 95.00th=[15664], 00:19:18.873 | 99.00th=[54264], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:19:18.873 | 99.99th=[63177] 00:19:18.873 bw ( KiB/s): min=45312, max=53611, per=6.73%, avg=49461.50, stdev=5868.28, samples=2 00:19:18.873 iops : min= 354, max= 418, avg=386.00, stdev=45.25, samples=2 00:19:18.873 write: IOPS=351, BW=43.9MiB/s (46.0MB/s)(46.2MiB/1054msec); 0 zone resets 00:19:18.873 slat (usec): min=7, max=892, avg=32.12, stdev=65.01 00:19:18.873 clat (msec): min=17, max=129, avg=77.77, stdev=11.56 00:19:18.873 lat (msec): min=17, max=129, avg=77.80, stdev=11.55 00:19:18.873 clat percentiles (msec): 00:19:18.873 | 1.00th=[ 29], 5.00th=[ 61], 10.00th=[ 70], 20.00th=[ 73], 00:19:18.873 | 30.00th=[ 75], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 81], 00:19:18.873 | 70.00th=[ 82], 80.00th=[ 83], 90.00th=[ 87], 95.00th=[ 91], 00:19:18.873 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 130], 99.95th=[ 130], 00:19:18.873 | 99.99th=[ 130] 00:19:18.873 bw ( KiB/s): min=41811, max=45824, per=5.85%, avg=43817.50, stdev=2837.62, samples=2 00:19:18.873 iops : min= 326, max= 358, avg=342.00, stdev=22.63, samples=2 00:19:18.873 lat (msec) : 10=1.18%, 20=49.61%, 50=1.18%, 100=46.71%, 250=1.32% 00:19:18.873 cpu : usr=0.47%, sys=1.42%, ctx=700, majf=0, minf=1 00:19:18.873 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.873 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.873 issued 
rwts: total=390,370,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.873 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.873 job14: (groupid=0, jobs=1): err= 0: pid=78190: Tue Jul 23 02:14:27 2024 00:19:18.873 read: IOPS=352, BW=44.1MiB/s (46.3MB/s)(46.5MiB/1054msec) 00:19:18.873 slat (usec): min=5, max=242, avg=20.49, stdev=21.41 00:19:18.873 clat (usec): min=2146, max=60484, avg=11255.70, stdev=4381.74 00:19:18.873 lat (usec): min=2169, max=60495, avg=11276.19, stdev=4380.81 00:19:18.873 clat percentiles (usec): 00:19:18.873 | 1.00th=[ 4686], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[10290], 00:19:18.873 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:19:18.873 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11863], 95.00th=[12125], 00:19:18.873 | 99.00th=[20055], 99.50th=[55313], 99.90th=[60556], 99.95th=[60556], 00:19:18.873 | 99.99th=[60556] 00:19:18.873 bw ( KiB/s): min=46848, max=47711, per=6.43%, avg=47279.50, stdev=610.23, samples=2 00:19:18.874 iops : min= 366, max= 372, avg=369.00, stdev= 4.24, samples=2 00:19:18.874 write: IOPS=371, BW=46.5MiB/s (48.7MB/s)(49.0MiB/1054msec); 0 zone resets 00:19:18.874 slat (usec): min=7, max=482, avg=30.73, stdev=46.35 00:19:18.874 clat (msec): min=16, max=124, avg=75.13, stdev=11.55 00:19:18.874 lat (msec): min=16, max=124, avg=75.16, stdev=11.56 00:19:18.874 clat percentiles (msec): 00:19:18.874 | 1.00th=[ 23], 5.00th=[ 61], 10.00th=[ 69], 20.00th=[ 71], 00:19:18.874 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 77], 00:19:18.874 | 70.00th=[ 79], 80.00th=[ 81], 90.00th=[ 84], 95.00th=[ 87], 00:19:18.874 | 99.00th=[ 120], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:19:18.874 | 99.99th=[ 125] 00:19:18.874 bw ( KiB/s): min=45659, max=47360, per=6.21%, avg=46509.50, stdev=1202.79, samples=2 00:19:18.874 iops : min= 356, max= 370, avg=363.00, stdev= 9.90, samples=2 00:19:18.874 lat (msec) : 4=0.39%, 10=4.19%, 20=43.85%, 50=1.44%, 100=48.69% 00:19:18.874 lat (msec) : 
250=1.44% 00:19:18.874 cpu : usr=0.47%, sys=1.42%, ctx=699, majf=0, minf=1 00:19:18.874 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.874 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.874 issued rwts: total=372,392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.874 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.874 job15: (groupid=0, jobs=1): err= 0: pid=78191: Tue Jul 23 02:14:27 2024 00:19:18.874 read: IOPS=342, BW=42.8MiB/s (44.9MB/s)(45.4MiB/1060msec) 00:19:18.874 slat (usec): min=7, max=355, avg=25.20, stdev=23.24 00:19:18.874 clat (usec): min=3357, max=67023, avg=11349.05, stdev=4475.42 00:19:18.874 lat (usec): min=3389, max=67035, avg=11374.25, stdev=4474.10 00:19:18.874 clat percentiles (usec): 00:19:18.874 | 1.00th=[ 4752], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:19:18.874 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:19:18.874 | 70.00th=[11338], 80.00th=[11600], 90.00th=[12256], 95.00th=[15533], 00:19:18.874 | 99.00th=[21103], 99.50th=[64226], 99.90th=[66847], 99.95th=[66847], 00:19:18.874 | 99.99th=[66847] 00:19:18.874 bw ( KiB/s): min=44544, max=47872, per=6.29%, avg=46208.00, stdev=2353.25, samples=2 00:19:18.874 iops : min= 348, max= 374, avg=361.00, stdev=18.38, samples=2 00:19:18.874 write: IOPS=374, BW=46.8MiB/s (49.1MB/s)(49.6MiB/1060msec); 0 zone resets 00:19:18.874 slat (usec): min=8, max=11279, avg=85.24, stdev=745.91 00:19:18.874 clat (usec): min=1807, max=131249, avg=73515.24, stdev=15512.93 00:19:18.874 lat (msec): min=13, max=131, avg=73.60, stdev=15.28 00:19:18.874 clat percentiles (msec): 00:19:18.874 | 1.00th=[ 14], 5.00th=[ 48], 10.00th=[ 66], 20.00th=[ 70], 00:19:18.874 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 78], 00:19:18.874 | 70.00th=[ 78], 80.00th=[ 80], 90.00th=[ 83], 95.00th=[ 85], 00:19:18.874 | 
99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:19:18.874 | 99.99th=[ 132] 00:19:18.874 bw ( KiB/s): min=45568, max=48640, per=6.29%, avg=47104.00, stdev=2172.23, samples=2 00:19:18.874 iops : min= 356, max= 380, avg=368.00, stdev=16.97, samples=2 00:19:18.874 lat (msec) : 2=0.13%, 4=0.26%, 10=7.89%, 20=40.39%, 50=1.45% 00:19:18.874 lat (msec) : 100=48.03%, 250=1.84% 00:19:18.874 cpu : usr=0.76%, sys=1.61%, ctx=601, majf=0, minf=1 00:19:18.874 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=95.9%, >=64=0.0% 00:19:18.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:18.874 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 00:19:18.874 issued rwts: total=363,397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:18.874 latency : target=0, window=0, percentile=100.00%, depth=32 00:19:18.874 00:19:18.874 Run status group 0 (all jobs): 00:19:18.874 READ: bw=718MiB/s (753MB/s), 39.2MiB/s-52.3MiB/s (41.2MB/s-54.8MB/s), io=761MiB (798MB), run=1045-1060msec 00:19:18.874 WRITE: bw=732MiB/s (767MB/s), 43.9MiB/s-47.2MiB/s (46.0MB/s-49.5MB/s), io=776MiB (813MB), run=1045-1060msec 00:19:18.874 00:19:18.874 Disk stats (read/write): 00:19:18.874 sda: ios=366/323, merge=0/0, ticks=3479/23915, in_queue=27394, util=73.88% 00:19:18.874 sdb: ios=346/302, merge=0/0, ticks=3519/23790, in_queue=27310, util=74.19% 00:19:18.874 sdc: ios=410/322, merge=0/0, ticks=4006/23570, in_queue=27577, util=75.10% 00:19:18.874 sdf: ios=379/322, merge=0/0, ticks=3640/23700, in_queue=27341, util=74.51% 00:19:18.874 sdd: ios=388/325, merge=0/0, ticks=3736/23793, in_queue=27529, util=76.03% 00:19:18.874 sde: ios=397/304, merge=0/0, ticks=4109/23360, in_queue=27470, util=76.41% 00:19:18.874 sdg: ios=383/322, merge=0/0, ticks=3692/23523, in_queue=27215, util=77.13% 00:19:18.874 sdh: ios=320/322, merge=0/0, ticks=3522/23613, in_queue=27135, util=76.75% 00:19:18.874 sdi: ios=321/336, merge=0/0, ticks=3368/24510, in_queue=27878, util=79.63% 
00:19:18.874 sdj: ios=362/305, merge=0/0, ticks=4283/23187, in_queue=27470, util=83.38% 00:19:18.874 sdk: ios=382/322, merge=0/0, ticks=4225/23296, in_queue=27521, util=83.10% 00:19:18.874 sdl: ios=338/323, merge=0/0, ticks=3899/23374, in_queue=27274, util=84.17% 00:19:18.874 sdm: ios=391/320, merge=0/0, ticks=4263/22876, in_queue=27140, util=84.30% 00:19:18.874 sdn: ios=348/302, merge=0/0, ticks=4161/23222, in_queue=27384, util=86.21% 00:19:18.874 sdo: ios=335/323, merge=0/0, ticks=3633/23789, in_queue=27423, util=86.29% 00:19:18.874 sdp: ios=340/331, merge=0/0, ticks=3682/23394, in_queue=27077, util=89.03% 00:19:18.874 [2024-07-23 02:14:27.634807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.874 [2024-07-23 02:14:27.638555] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:19:18.874 Cleaning up iSCSI connection 00:19:18.874 02:14:27 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@82 -- # iscsicleanup 00:19:18.874 02:14:27 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:19:18.874 02:14:27 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:19:19.439 Logging out of session [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] 00:19:19.439 Logging out of session [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:19:19.439 Logging out of session [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 
00:19:19.440 Logging out of session [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:19:19.440 Logging out of session [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:19:19.440 Logout of [sid: 22, target: iqn.2016-06.io.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 23, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 24, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 25, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 26, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 27, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 28, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 29, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 30, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 
00:19:19.440 Logout of [sid: 31, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 32, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 33, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 34, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 35, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 36, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:19:19.440 Logout of [sid: 37, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@983 -- # rm -rf 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@84 -- # RPCS= 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # seq 0 15 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target0\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc0\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target1\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete 
Malloc1\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target2\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc2\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target3\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc3\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target4\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc4\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target5\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc5\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target6\n' 
00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc6\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target7\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc7\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target8\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc8\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target9\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc9\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target10\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc10\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target11\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc11\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target12\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc12\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target13\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc13\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target14\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc14\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@86 -- # for i in $(seq 0 $CONNECTION_NUMBER) 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@87 -- # RPCS+='iscsi_delete_target_node iqn.2016-06.io.spdk:Target15\n' 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@88 -- # RPCS+='bdev_malloc_delete Malloc15\n' 00:19:19.440 02:14:28 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:19.440 02:14:28 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@90 -- # echo -e iscsi_delete_target_node 'iqn.2016-06.io.spdk:Target0\nbdev_malloc_delete' 'Malloc0\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target1\nbdev_malloc_delete' 'Malloc1\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target2\nbdev_malloc_delete' 'Malloc2\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target3\nbdev_malloc_delete' 'Malloc3\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target4\nbdev_malloc_delete' 'Malloc4\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target5\nbdev_malloc_delete' 'Malloc5\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target6\nbdev_malloc_delete' 'Malloc6\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target7\nbdev_malloc_delete' 'Malloc7\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target8\nbdev_malloc_delete' 'Malloc8\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target9\nbdev_malloc_delete' 'Malloc9\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target10\nbdev_malloc_delete' 'Malloc10\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target11\nbdev_malloc_delete' 'Malloc11\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target12\nbdev_malloc_delete' 'Malloc12\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target13\nbdev_malloc_delete' 'Malloc13\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target14\nbdev_malloc_delete' 'Malloc14\niscsi_delete_target_node' 'iqn.2016-06.io.spdk:Target15\nbdev_malloc_delete' 'Malloc15\n' 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@92 -- # trap 'delete_tmp_files; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@94 -- # killprocess 77656 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 77656 ']' 
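The cleanup trace above shows trace_record.sh building one `iscsi_delete_target_node` plus `bdev_malloc_delete` pair per connection into a single newline-separated string, then piping the whole batch to `scripts/rpc.py` in one invocation. A minimal self-contained sketch of that batching pattern (the target/bdev names and the count of 16 come from the log; `rpc.py` itself is replaced by a line count so the sketch runs anywhere):

```shell
#!/bin/sh
# Build a newline-separated batch of RPC commands, one
# iscsi_delete_target_node + bdev_malloc_delete pair per target,
# mirroring the trace_record.sh@86-90 loop traced in the log above.
CONNECTION_NUMBER=15
RPCS=
for i in $(seq 0 $CONNECTION_NUMBER); do
    RPCS="${RPCS}iscsi_delete_target_node iqn.2016-06.io.spdk:Target$i\n"
    RPCS="${RPCS}bdev_malloc_delete Malloc$i\n"
done
# The real script pipes this to scripts/rpc.py, which executes each
# line as a separate RPC over one connection; here we only count the
# command lines the loop produced (16 targets x 2 commands = 32).
printf '%b' "$RPCS" | wc -l
```

Feeding all deletions through a single `rpc.py` invocation avoids paying the Python startup cost 32 times, which is why the script accumulates `$RPCS` instead of calling the tool inside the loop.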
00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 77656 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77656 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:22.721 killing process with pid 77656 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:22.721 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77656' 00:19:22.722 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 77656 00:19:22.722 02:14:30 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 77656 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@95 -- # killprocess 77691 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@948 -- # '[' -z 77691 ']' 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@952 -- # kill -0 77691 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # uname 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 77691 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@954 -- # process_name=spdk_trace_reco 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@958 -- # '[' spdk_trace_reco = sudo ']' 00:19:24.624 killing 
process with pid 77691 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@966 -- # echo 'killing process with pid 77691' 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@967 -- # kill 77691 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@972 -- # wait 77691 00:19:24.624 02:14:33 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@96 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f ./tmp-trace/record.trace 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # grep 'trace entries for lcore' ./tmp-trace/record.notice 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # cut -d ' ' -f 2 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@100 -- # record_num='113566 00:19:34.600 111423 00:19:34.600 107025 00:19:34.600 110558' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # grep 'Trace Size of lcore' ./tmp-trace/trace.log 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # cut -d ' ' -f 6 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@103 -- # trace_tool_num='113566 00:19:34.600 111423 00:19:34.600 107025 00:19:34.600 110558' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@105 -- # delete_tmp_files 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@19 -- # rm -rf ./tmp-trace 00:19:34.600 entries numbers from trace record are: 113566 111423 107025 110558 00:19:34.600 entries numbers from trace tool are: 113566 111423 107025 110558 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@107 -- # echo 'entries numbers from trace record are:' 113566 111423 107025 110558 00:19:34.600 02:14:43 
iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@108 -- # echo 'entries numbers from trace tool are:' 113566 111423 107025 110558 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@110 -- # arr_record_num=($record_num) 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@111 -- # arr_trace_tool_num=($trace_tool_num) 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@112 -- # len_arr_record_num=4 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@113 -- # len_arr_trace_tool_num=4 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@116 -- # '[' 4 -ne 4 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # seq 0 3 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 113566 -le 4096 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 113566 -ne 113566 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 111423 -le 4096 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 111423 -ne 111423 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 107025 -le 4096 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 107025 -ne 
107025 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@122 -- # for i in $(seq 0 $((len_arr_record_num - 1))) 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@123 -- # '[' 110558 -le 4096 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@128 -- # '[' 110558 -ne 110558 ']' 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@135 -- # trap - SIGINT SIGTERM EXIT 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- trace_record/trace_record.sh@136 -- # iscsitestfini 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:19:34.600 00:19:34.600 real 0m22.244s 00:19:34.600 user 1m1.021s 00:19:34.600 sys 0m4.099s 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_trace_record -- common/autotest_common.sh@10 -- # set +x 00:19:34.600 ************************************ 00:19:34.600 END TEST iscsi_tgt_trace_record 00:19:34.600 ************************************ 00:19:34.600 02:14:43 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:19:34.600 02:14:43 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@41 -- # run_test iscsi_tgt_login_redirection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:19:34.600 02:14:43 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:34.600 02:14:43 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.600 02:14:43 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:19:34.600 ************************************ 00:19:34.600 START TEST iscsi_tgt_login_redirection 00:19:34.600 ************************************ 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1123 -- # 
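The pass/fail logic traced just above compares the per-lcore entry counts parsed from `record.notice` (spdk_trace_record output) against those parsed from `trace.log` (the offline spdk_trace tool), and fails if any count is at or below the 4096 threshold used by the script or if the two sources disagree. A sketch of that verification, with the four counts hard-coded from this run (a simplified stand-in for trace_record.sh@122-128, not the script itself):

```shell
#!/usr/bin/env bash
# Per-lcore entry counts as printed in the log above: the first array
# comes from the recorder's notice file, the second from the trace tool.
record_num=(113566 111423 107025 110558)
trace_tool_num=(113566 111423 107025 110558)

status=0
for i in "${!record_num[@]}"; do
    # A count <= 4096 (the script's threshold) means the workload
    # produced too few events for the comparison to be meaningful.
    if [ "${record_num[$i]}" -le 4096 ]; then
        echo "lcore $i: too few entries (${record_num[$i]})"
        status=1
    fi
    # Recorder and offline tool must agree exactly on every lcore.
    if [ "${record_num[$i]}" -ne "${trace_tool_num[$i]}" ]; then
        echo "lcore $i: ${record_num[$i]} != ${trace_tool_num[$i]}"
        status=1
    fi
done
echo "status=$status"
```

With the counts from this run all four lcores match, so the check passes and the test prints `END TEST iscsi_tgt_trace_record` as seen in the log.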
/home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection/login_redirection.sh 00:19:34.600 * Looking for test storage... 00:19:34.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/login_redirection 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:34.600 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- 
iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@12 -- # iscsitestinit 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@14 -- # NULL_BDEV_SIZE=64 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@17 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@20 -- # rpc_addr1=/var/tmp/spdk0.sock 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@21 -- # rpc_addr2=/var/tmp/spdk1.sock 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@25 -- # timing_enter start_iscsi_tgts 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:34.601 
02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@28 -- # pid1=78561 00:19:34.601 Process pid: 78561 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@29 -- # echo 'Process pid: 78561' 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@27 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk0.sock -i 0 -m 0x1 --wait-for-rpc 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@32 -- # pid2=78562 00:19:34.601 Process pid: 78562 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@33 -- # echo 'Process pid: 78562' 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@35 -- # trap 'killprocess $pid1; killprocess $pid2; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@31 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -r /var/tmp/spdk1.sock -i 1 -m 0x2 --wait-for-rpc 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@37 -- # waitforlisten 78561 /var/tmp/spdk0.sock 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 78561 ']' 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk0.sock 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:34.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock... 
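The login-redirection test launches two independent `iscsi_tgt` processes, each with its own shared-memory id (`-i`), core mask (`-m`) and RPC socket (`-r`), so that `rpc.py -s` can configure each target separately before any redirection happens. A hedged sketch of that dual-launch pattern, with `sleep` standing in for `build/bin/iscsi_tgt` (the real script also wraps each launch in `ip netns exec spdk_iscsi_ns`, omitted here to keep the sketch runnable):

```shell
#!/usr/bin/env bash
# Two-target launch pattern from login_redirection.sh@27-35: distinct
# RPC sockets let the two instances be driven independently.
rpc_addr1=/var/tmp/spdk0.sock
rpc_addr2=/var/tmp/spdk1.sock

# Stand-ins for:
#   iscsi_tgt -r "$rpc_addr1" -i 0 -m 0x1 --wait-for-rpc
#   iscsi_tgt -r "$rpc_addr2" -i 1 -m 0x2 --wait-for-rpc
sleep 5 & pid1=$!
sleep 5 & pid2=$!

# Both pids must be alive before configuration starts; the real script
# then polls each UNIX socket until it appears (waitforlisten).
if kill -0 "$pid1" 2>/dev/null && kill -0 "$pid2" 2>/dev/null; then
    both_running=1
else
    both_running=0
fi
echo "both_running=$both_running"

kill "$pid1" "$pid2" 2>/dev/null
wait 2>/dev/null
```

The trap registered at trace_record.sh-style line 35 (`killprocess $pid1; killprocess $pid2`) guarantees both processes are torn down even if a later configuration step fails, which is the same pattern the sketch's trailing `kill`/`wait` imitates.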
00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk0.sock...' 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:34.601 02:14:43 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:34.859 [2024-07-23 02:14:43.531560] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:34.859 [2024-07-23 02:14:43.531793] [ DPDK EAL parameters: iscsi -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.859 [2024-07-23 02:14:43.534216] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:19:34.859 [2024-07-23 02:14:43.534420] [ DPDK EAL parameters: iscsi -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:35.118 [2024-07-23 02:14:43.715810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.118 [2024-07-23 02:14:43.721515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.377 [2024-07-23 02:14:43.941325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.377 [2024-07-23 02:14:44.013615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.635 02:14:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:35.635 02:14:44 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:19:35.635 02:14:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_set_options -w 0 -o 30 -a 16 00:19:35.894 02:14:44 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock framework_start_init 00:19:36.858 iscsi_tgt_1 is listening. 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@40 -- # echo 'iscsi_tgt_1 is listening.' 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@42 -- # waitforlisten 78562 /var/tmp/spdk1.sock 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@829 -- # '[' -z 78562 ']' 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk1.sock 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:36.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock... 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk1.sock...' 
00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:36.858 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x 00:19:37.128 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.128 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@862 -- # return 0 00:19:37.128 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_set_options -w 0 -o 30 -a 16 00:19:37.387 02:14:45 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock framework_start_init 00:19:38.325 iscsi_tgt_2 is listening. 00:19:38.325 02:14:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@45 -- # echo 'iscsi_tgt_2 is listening.' 
00:19:38.325 02:14:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@47 -- # timing_exit start_iscsi_tgts
00:19:38.325 02:14:46 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:38.325 02:14:46 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x
00:19:38.325 02:14:46 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:19:38.325 02:14:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_portal_group 1 10.0.0.1:3260
00:19:38.584 02:14:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock bdev_null_create Null0 64 512
00:19:38.842 Null0
00:19:38.842 02:14:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d
00:19:39.101 02:14:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:19:39.359 02:14:47 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_portal_group 1 10.0.0.3:3260 -p
00:19:39.359 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock bdev_null_create Null0 64 512
00:19:39.618 Null0
00:19:39.618 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_create_target_node Target1 Target1_alias Null0:0 1:2 64 -d
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@67 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:19:39.876 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@68 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:19:39.876 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:19:39.876 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@69 -- # waitforiscsidevices 1
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@116 -- # local num=1
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:19:39.876 [2024-07-23 02:14:48.540402] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@119 -- # n=1
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']'
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@123 -- # return 0
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@72 -- # fiopid=78665
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t randrw -r 15
00:19:39.876 FIO pid: 78665
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@73 -- # echo 'FIO pid: 78665'
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@75 -- # trap 'iscsicleanup; killprocess $pid1; killprocess $pid2; killprocess $fiopid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections
00:19:39.876 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # jq length
00:19:39.876 [global]
00:19:39.876 thread=1
00:19:39.876 invalidate=1
00:19:39.876 rw=randrw
00:19:39.876 time_based=1
00:19:39.876 runtime=15
00:19:39.876 ioengine=libaio
00:19:39.876 direct=1
00:19:39.876 bs=512
00:19:39.876 iodepth=1
00:19:39.876 norandommap=1
00:19:39.876 numjobs=1
00:19:39.876 
00:19:39.876 [job0]
00:19:39.876 filename=/dev/sda
00:19:39.876 queue_depth set to 113 (sda)
00:19:40.134 job0: (g=0): rw=randrw, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1
00:19:40.134 fio-3.35
00:19:40.134 Starting 1 thread
00:19:40.134 [2024-07-23 02:14:48.710271] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:19:40.134 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@77 -- # '[' 1 = 1 ']'
00:19:40.134 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # jq length
00:19:40.134 02:14:48 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections
00:19:40.392 02:14:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@78 -- # '[' 0 = 0 ']'
00:19:40.392 02:14:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1 -a 10.0.0.3 -p 3260
00:19:40.651 02:14:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1
00:19:40.908 02:14:49 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@85 -- # sleep 5
00:19:46.173 02:14:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections
00:19:46.173 02:14:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # jq length
00:19:46.173 02:14:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@87 -- # '[' 0 = 0 ']'
00:19:46.173 02:14:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections
00:19:46.173 02:14:54 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # jq length
00:19:46.432 02:14:55 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@88 -- # '[' 1 = 1 ']'
00:19:46.432 02:14:55 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_target_node_set_redirect iqn.2016-06.io.spdk:Target1 1
00:19:46.690 02:14:55 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_target_node_request_logout iqn.2016-06.io.spdk:Target1 -t 1
00:19:46.690 02:14:55 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@93 -- # sleep 5
00:19:51.959 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk0.sock iscsi_get_connections
00:19:51.959 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # jq length
00:19:51.959 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@95 -- # '[' 1 = 1 ']'
00:19:51.959 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk1.sock iscsi_get_connections
00:19:51.959 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # jq length
00:19:52.218 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@96 -- # '[' 0 = 0 ']'
00:19:52.218 02:15:00 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@98 -- # wait 78665
00:19:55.502 [2024-07-23 02:15:03.827306] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:19:55.502 
00:19:55.502 job0: (groupid=0, jobs=1): err= 0: pid=78694: Tue Jul 23 02:15:03 2024
00:19:55.502 read: IOPS=4054, BW=2027KiB/s (2076kB/s)(29.7MiB/15001msec)
00:19:55.502 slat (nsec): min=3969, max=60073, avg=6587.25, stdev=2230.69
00:19:55.503 clat (usec): min=55, max=2512, avg=82.14, stdev=16.60
00:19:55.503 lat (usec): min=75, max=2521, avg=88.73, stdev=16.97
00:19:55.503 clat percentiles (usec):
00:19:55.503 | 1.00th=[ 73], 5.00th=[ 74], 10.00th=[ 75], 20.00th=[ 77],
00:19:55.503 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 79], 60.00th=[ 81],
00:19:55.503 | 70.00th=[ 83], 80.00th=[ 87], 90.00th=[ 94], 95.00th=[ 102],
00:19:55.503 | 99.00th=[ 120], 99.50th=[ 128], 99.90th=[ 153], 99.95th=[ 204],
00:19:55.503 | 99.99th=[ 510]
00:19:55.503 bw ( KiB/s): min= 478, max= 2893, per=100.00%, avg=2525.91, stdev=635.11, samples=23
00:19:55.503 iops : min= 956, max= 5786, avg=5051.87, stdev=1270.22, samples=23
00:19:55.503 write: IOPS=4043, BW=2022KiB/s (2070kB/s)(29.6MiB/15001msec); 0 zone resets
00:19:55.503 slat (nsec): min=4117, max=58840, avg=6511.54, stdev=2236.17
00:19:55.503 clat (usec): min=70, max=2006.6k, avg=150.38, stdev=11513.56
00:19:55.503 lat (usec): min=79, max=2006.6k, avg=156.89, stdev=11513.61
00:19:55.503 clat percentiles (usec):
00:19:55.503 | 1.00th=[ 75], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79],
00:19:55.503 | 30.00th=[ 80], 40.00th=[ 80], 50.00th=[ 81], 60.00th=[ 83],
00:19:55.503 | 70.00th=[ 85], 80.00th=[ 90], 90.00th=[ 97], 95.00th=[ 105],
00:19:55.503 | 99.00th=[ 123], 99.50th=[ 130], 99.90th=[ 155], 99.95th=[ 215],
00:19:55.503 | 99.99th=[ 644]
00:19:55.503 bw ( KiB/s): min= 518, max= 2920, per=100.00%, avg=2516.35, stdev=627.53, samples=23
00:19:55.503 iops : min= 1036, max= 5840, avg=5032.74, stdev=1255.07, samples=23
00:19:55.503 lat (usec) : 100=93.60%, 250=6.35%, 500=0.03%, 750=0.01%, 1000=0.01%
00:19:55.503 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01%
00:19:55.503 cpu : usr=1.81%, sys=6.26%, ctx=121507, majf=0, minf=1
00:19:55.503 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:55.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:55.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:55.503 issued rwts: total=60827,60659,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:55.503 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:55.503 
00:19:55.503 Run status group 0 (all jobs):
00:19:55.503 READ: bw=2027KiB/s (2076kB/s), 2027KiB/s-2027KiB/s (2076kB/s-2076kB/s), io=29.7MiB (31.1MB), run=15001-15001msec
00:19:55.503 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=29.6MiB (31.1MB), run=15001-15001msec
00:19:55.503 
00:19:55.503 Disk stats (read/write):
00:19:55.503 sda: ios=60209/60042, merge=0/0, ticks=4983/9109, in_queue=14092, util=99.41%
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@100 -- # trap - SIGINT SIGTERM EXIT
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@102 -- # iscsicleanup
00:19:55.503 Cleaning up iSCSI connection
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection'
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout
00:19:55.503 Logging out of session [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260]
00:19:55.503 Logout of [sid: 38, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful.
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@983 -- # rm -rf
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@103 -- # killprocess 78561
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 78561 ']'
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 78561
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78561
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:55.503 killing process with pid 78561
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78561'
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 78561
00:19:55.503 02:15:03 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 78561
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@104 -- # killprocess 78562
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@948 -- # '[' -z 78562 ']'
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@952 -- # kill -0 78562
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # uname
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78562
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:19:57.406 killing process with pid 78562
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78562'
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@967 -- # kill 78562
00:19:57.406 02:15:05 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@972 -- # wait 78562
00:19:59.309 02:15:07 iscsi_tgt.iscsi_tgt_login_redirection -- login_redirection/login_redirection.sh@105 -- # iscsitestfini
00:19:59.309 02:15:07 iscsi_tgt.iscsi_tgt_login_redirection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']'
00:19:59.309 
00:19:59.309 real 0m24.546s
00:19:59.309 user 0m46.064s
00:19:59.309 sys 0m6.051s
00:19:59.309 02:15:07 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:59.309 02:15:07 iscsi_tgt.iscsi_tgt_login_redirection -- common/autotest_common.sh@10 -- # set +x
00:19:59.309 ************************************
00:19:59.309 END TEST iscsi_tgt_login_redirection
00:19:59.309 ************************************
00:19:59.310 02:15:07 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0
00:19:59.310 02:15:07 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@42 -- # run_test iscsi_tgt_digests /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh
00:19:59.310 02:15:07 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:19:59.310 02:15:07 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:59.310 02:15:07 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x
00:19:59.310 ************************************
00:19:59.310 START TEST iscsi_tgt_digests
00:19:59.310 ************************************
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests/digests.sh
00:19:59.310 * Looking for test storage...
00:19:59.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/digests
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE")
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}")
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@11 -- # iscsitestinit
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']'
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@49 -- # MALLOC_BDEV_SIZE=64
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@50 -- # MALLOC_BLOCK_SIZE=512
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@52 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@54 -- # timing_enter start_iscsi_tgt
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@57 -- # pid=78981
00:19:59.310 Process pid: 78981
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@56 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@58 -- # echo 'Process pid: 78981'
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@60 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@62 -- # waitforlisten 78981
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@829 -- # '[' -z 78981 ']'
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:59.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:59.310 02:15:07 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:19:59.592 [2024-07-23 02:15:08.109705] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization...
00:19:59.592 [2024-07-23 02:15:08.109922] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78981 ]
00:19:59.592 [2024-07-23 02:15:08.283186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:59.860 [2024-07-23 02:15:08.507119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:59.860 [2024-07-23 02:15:08.507334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:59.860 [2024-07-23 02:15:08.507442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:59.860 [2024-07-23 02:15:08.507907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:20:00.427 02:15:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:20:00.427 02:15:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@862 -- # return 0
00:20:00.427 02:15:08 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@63 -- # rpc_cmd iscsi_set_options -o 30 -a 16
00:20:00.427 02:15:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.427 02:15:08 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:00.427 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.427 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@64 -- # rpc_cmd framework_start_init
00:20:00.427 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.427 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:00.993 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.993 iscsi_tgt is listening. Running tests...
00:20:00.993 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@65 -- # echo 'iscsi_tgt is listening. Running tests...'
00:20:00.993 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@67 -- # timing_exit start_iscsi_tgt
00:20:00.993 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@728 -- # xtrace_disable
00:20:00.993 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:01.250 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@69 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260
00:20:01.250 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.250 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@70 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@71 -- # rpc_cmd bdev_malloc_create 64 512
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:01.251 Malloc0
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@76 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Malloc0:0 1:2 64 -d
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.251 02:15:09 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@77 -- # sleep 1
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@79 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260
00:20:02.185 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.DataDigest' -v None
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # true
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@83 -- # DataDigestAbility='iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name.
00:20:02.185 iscsiadm: Could not execute operation on all records: invalid parameter'
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@84 -- # '[' 'iscsiadm: Cannot modify node.conn[0].iscsi.DataDigest. Invalid param name.
00:20:02.185 iscsiadm: Could not execute operation on all records: invalid parameterx' '!=' x ']'
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@85 -- # run_test iscsi_tgt_digest iscsi_header_digest_test
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1105 -- # xtrace_disable
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x
00:20:02.185 ************************************
00:20:02.185 START TEST iscsi_tgt_digest
00:20:02.185 ************************************
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1123 -- # iscsi_header_digest_test
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@27 -- # node_login_fio_logout 'HeaderDigest -v CRC32C'
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@"
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260
00:20:02.185 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260]
00:20:02.185 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful.
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 ))
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 ))
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*'
00:20:02.185 [2024-07-23 02:15:10.954807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']'
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0
00:20:02.185 02:15:10 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2
00:20:02.443 [global]
00:20:02.443 thread=1
00:20:02.443 invalidate=1
00:20:02.443 rw=write
00:20:02.443 time_based=1
00:20:02.443 runtime=2
00:20:02.443 ioengine=libaio
00:20:02.443 direct=1
00:20:02.443 bs=512
00:20:02.443 iodepth=1
00:20:02.443 norandommap=1
00:20:02.443 numjobs=1
00:20:02.443 
00:20:02.443 [job0]
00:20:02.443 filename=/dev/sda
00:20:02.443 queue_depth set to 113 (sda)
00:20:02.443 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1
00:20:02.443 fio-3.35
00:20:02.443 Starting 1 thread
00:20:02.443 [2024-07-23 02:15:11.117150] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:20:04.975 [2024-07-23 02:15:13.226748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9
00:20:04.975 
00:20:04.975 job0: (groupid=0, jobs=1): err= 0: pid=79084: Tue Jul 23 02:15:13 2024
00:20:04.975 write: IOPS=7918, BW=3959KiB/s (4054kB/s)(7923KiB/2001msec); 0 zone resets
00:20:04.975 slat (usec): min=5, max=152, avg= 7.17, stdev= 4.04
00:20:04.975 clat (usec): min=9, max=791, avg=117.86, stdev=17.76
00:20:04.975 lat (usec): min=100, max=799, avg=125.03, stdev=18.29
00:20:04.975 clat percentiles (usec):
00:20:04.975 | 1.00th=[ 99], 5.00th=[ 102], 10.00th=[ 103], 20.00th=[ 106],
00:20:04.975 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 116],
00:20:04.975 | 70.00th=[ 122], 80.00th=[ 131], 90.00th=[ 141], 95.00th=[ 151],
00:20:04.975 | 99.00th=[ 165], 99.50th=[ 172], 99.90th=[ 200], 99.95th=[ 255],
00:20:04.975 | 99.99th=[ 717]
00:20:04.975 bw ( KiB/s): min= 3872, max= 4012, per=99.61%, avg=3944.00, stdev=70.09, samples=3
00:20:04.975 iops : min= 7744, max= 8024, avg=7888.00, stdev=140.17, samples=3
00:20:04.975 lat (usec) : 10=0.01%, 100=2.46%, 250=97.48%, 500=0.04%, 750=0.01%
00:20:04.975 lat (usec) : 1000=0.01%
00:20:04.975 cpu : usr=2.95%, sys=7.10%, ctx=15921, majf=0, minf=1
00:20:04.975 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:04.975 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:04.975 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:04.975 issued rwts: total=0,15845,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:04.975 latency : target=0, window=0, percentile=100.00%, depth=1
00:20:04.975 
00:20:04.975 Run status group 0 (all jobs):
00:20:04.975 WRITE: bw=3959KiB/s (4054kB/s), 3959KiB/s-3959KiB/s (4054kB/s-4054kB/s), io=7923KiB (8113kB), run=2001-2001msec
00:20:04.975 
00:20:04.975 Disk stats (read/write):
00:20:04.975 sda: ios=48/14968, merge=0/0, ticks=12/1742, in_queue=1755, util=95.52%
00:20:04.975 02:15:13 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:20:04.975 [global] 00:20:04.975 thread=1 00:20:04.975 invalidate=1 00:20:04.975 rw=read 00:20:04.975 time_based=1 00:20:04.975 runtime=2 00:20:04.975 ioengine=libaio 00:20:04.975 direct=1 00:20:04.975 bs=512 00:20:04.975 iodepth=1 00:20:04.975 norandommap=1 00:20:04.975 numjobs=1 00:20:04.975 00:20:04.975 [job0] 00:20:04.975 filename=/dev/sda 00:20:04.975 queue_depth set to 113 (sda) 00:20:04.975 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:04.975 fio-3.35 00:20:04.975 Starting 1 thread 00:20:06.877 00:20:06.877 job0: (groupid=0, jobs=1): err= 0: pid=79136: Tue Jul 23 02:15:15 2024 00:20:06.877 read: IOPS=9285, BW=4643KiB/s (4754kB/s)(9291KiB/2001msec) 00:20:06.877 slat (usec): min=5, max=130, avg= 6.96, stdev= 3.16 00:20:06.877 clat (usec): min=3, max=559, avg=99.64, stdev=14.81 00:20:06.877 lat (usec): min=85, max=567, avg=106.60, stdev=15.46 00:20:06.877 clat percentiles (usec): 00:20:06.877 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 91], 00:20:06.877 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 98], 00:20:06.877 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 123], 00:20:06.877 | 99.00th=[ 141], 99.50th=[ 167], 99.90th=[ 219], 99.95th=[ 297], 00:20:06.877 | 99.99th=[ 519] 00:20:06.877 bw ( KiB/s): min= 4521, max= 4833, per=100.00%, avg=4656.00, stdev=160.18, samples=3 00:20:06.877 iops : min= 9042, max= 9666, avg=9312.00, stdev=320.37, samples=3 00:20:06.877 lat (usec) : 4=0.01%, 100=66.37%, 250=33.56%, 500=0.05%, 750=0.01% 00:20:06.877 cpu : usr=2.95%, sys=7.90%, ctx=18610, majf=0, minf=1 00:20:06.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:06.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.877 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:06.877 issued rwts: total=18581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:06.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:06.877 00:20:06.877 Run status group 0 (all jobs): 00:20:06.877 READ: bw=4643KiB/s (4754kB/s), 4643KiB/s-4643KiB/s (4754kB/s-4754kB/s), io=9291KiB (9513kB), run=2001-2001msec 00:20:06.877 00:20:06.877 Disk stats (read/write): 00:20:06.877 sda: ios=17587/0, merge=0/0, ticks=1734/0, in_queue=1734, util=95.07% 00:20:06.877 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:20:06.877 Logging out of session [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:06.877 Logout of [sid: 39, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:06.877 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:06.878 iscsiadm: No active sessions. 
00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@31 -- # node_login_fio_logout 'HeaderDigest -v CRC32C,None' 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@14 -- # for arg in "$@" 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@15 -- # iscsiadm -m node -p 10.0.0.1:3260 -o update -n 'node.conn[0].iscsi.HeaderDigest' -v CRC32C,None 00:20:06.878 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@17 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:20:07.136 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:07.136 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@18 -- # waitforiscsidevices 1 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=1 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:07.136 [2024-07-23 02:15:15.669743] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=1 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:07.136 02:15:15 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t write -r 2 00:20:07.136 [global] 00:20:07.136 thread=1 00:20:07.136 invalidate=1 00:20:07.136 rw=write 00:20:07.136 time_based=1 00:20:07.136 runtime=2 00:20:07.136 ioengine=libaio 00:20:07.136 direct=1 00:20:07.136 bs=512 00:20:07.136 iodepth=1 00:20:07.136 norandommap=1 00:20:07.136 numjobs=1 00:20:07.136 00:20:07.136 [job0] 00:20:07.136 filename=/dev/sda 00:20:07.136 queue_depth set to 113 (sda) 00:20:07.136 job0: (g=0): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:07.136 fio-3.35 00:20:07.136 Starting 1 thread 00:20:07.136 [2024-07-23 02:15:15.829882] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:20:09.671 [2024-07-23 02:15:17.940759] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:20:09.671 00:20:09.672 job0: (groupid=0, jobs=1): err= 0: pid=79208: Tue Jul 23 02:15:17 2024 00:20:09.672 write: IOPS=7759, BW=3880KiB/s (3973kB/s)(7763KiB/2001msec); 0 zone resets 00:20:09.672 slat (nsec): min=4446, max=74511, avg=6311.36, stdev=2129.75 00:20:09.672 clat (usec): min=63, max=1964, avg=121.43, stdev=19.51 00:20:09.672 lat (usec): min=110, max=1977, avg=127.74, stdev=19.90 00:20:09.672 clat percentiles (usec): 00:20:09.672 | 1.00th=[ 108], 5.00th=[ 110], 10.00th=[ 111], 20.00th=[ 113], 00:20:09.672 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 121], 00:20:09.672 | 70.00th=[ 124], 80.00th=[ 129], 90.00th=[ 139], 95.00th=[ 145], 00:20:09.672 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 186], 99.95th=[ 200], 00:20:09.672 | 99.99th=[ 644] 00:20:09.672 bw ( KiB/s): min= 3691, max= 3994, per=99.32%, avg=3853.33, stdev=152.66, samples=3 00:20:09.672 iops : min= 7382, max= 7988, avg=7706.67, stdev=305.32, samples=3 00:20:09.672 lat (usec) : 100=0.01%, 250=99.95%, 500=0.03%, 750=0.01% 00:20:09.672 lat (msec) : 2=0.01% 00:20:09.672 cpu : usr=2.50%, sys=6.45%, ctx=15528, majf=0, minf=1 00:20:09.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:09.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:09.672 issued rwts: total=0,15526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:09.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:09.672 00:20:09.672 Run status group 0 (all jobs): 00:20:09.672 WRITE: bw=3880KiB/s (3973kB/s), 3880KiB/s-3880KiB/s (3973kB/s-3973kB/s), io=7763KiB (7949kB), run=2001-2001msec 00:20:09.672 00:20:09.672 Disk stats (read/write): 00:20:09.672 sda: ios=48/14647, merge=0/0, ticks=11/1756, in_queue=1767, util=95.52% 
00:20:09.672 02:15:17 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 512 -d 1 -t read -r 2 00:20:09.672 [global] 00:20:09.672 thread=1 00:20:09.672 invalidate=1 00:20:09.672 rw=read 00:20:09.672 time_based=1 00:20:09.672 runtime=2 00:20:09.672 ioengine=libaio 00:20:09.672 direct=1 00:20:09.672 bs=512 00:20:09.672 iodepth=1 00:20:09.672 norandommap=1 00:20:09.672 numjobs=1 00:20:09.672 00:20:09.672 [job0] 00:20:09.672 filename=/dev/sda 00:20:09.672 queue_depth set to 113 (sda) 00:20:09.672 job0: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1 00:20:09.672 fio-3.35 00:20:09.672 Starting 1 thread 00:20:11.577 00:20:11.577 job0: (groupid=0, jobs=1): err= 0: pid=79257: Tue Jul 23 02:15:20 2024 00:20:11.577 read: IOPS=8407, BW=4204KiB/s (4305kB/s)(8412KiB/2001msec) 00:20:11.577 slat (nsec): min=5626, max=48780, avg=7137.79, stdev=3098.01 00:20:11.577 clat (usec): min=87, max=1760, avg=110.56, stdev=18.09 00:20:11.577 lat (usec): min=98, max=1767, avg=117.69, stdev=18.46 00:20:11.577 clat percentiles (usec): 00:20:11.577 | 1.00th=[ 97], 5.00th=[ 99], 10.00th=[ 100], 20.00th=[ 102], 00:20:11.577 | 30.00th=[ 103], 40.00th=[ 105], 50.00th=[ 106], 60.00th=[ 109], 00:20:11.577 | 70.00th=[ 112], 80.00th=[ 119], 90.00th=[ 129], 95.00th=[ 137], 00:20:11.577 | 99.00th=[ 151], 99.50th=[ 157], 99.90th=[ 176], 99.95th=[ 194], 00:20:11.577 | 99.99th=[ 510] 00:20:11.577 bw ( KiB/s): min= 4169, max= 4245, per=100.00%, avg=4208.00, stdev=38.04, samples=3 00:20:11.577 iops : min= 8338, max= 8490, avg=8416.00, stdev=76.08, samples=3 00:20:11.577 lat (usec) : 100=10.49%, 250=89.48%, 500=0.02%, 750=0.01% 00:20:11.577 lat (msec) : 2=0.01% 00:20:11.577 cpu : usr=2.80%, sys=8.10%, ctx=16832, majf=0, minf=1 00:20:11.577 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:11.577 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:11.577 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.577 issued rwts: total=16823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.577 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:11.577 00:20:11.577 Run status group 0 (all jobs): 00:20:11.577 READ: bw=4204KiB/s (4305kB/s), 4204KiB/s-4204KiB/s (4305kB/s-4305kB/s), io=8412KiB (8613kB), run=2001-2001msec 00:20:11.577 00:20:11.577 Disk stats (read/write): 00:20:11.577 sda: ios=15891/0, merge=0/0, ticks=1722/0, in_queue=1722, util=95.07% 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@21 -- # iscsiadm -m node --logout -p 10.0.0.1:3260 00:20:11.577 Logging out of session [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:20:11.577 Logout of [sid: 40, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- digests/digests.sh@22 -- # waitforiscsidevices 0 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@116 -- # local num=0 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:20:11.577 iscsiadm: No active sessions. 
00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # true 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@119 -- # n=0 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@120 -- # '[' 0 -ne 0 ']' 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- iscsi_tgt/common.sh@123 -- # return 0 00:20:11.577 00:20:11.577 real 0m9.430s 00:20:11.577 user 0m0.705s 00:20:11.577 sys 0m0.896s 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:11.577 02:15:20 iscsi_tgt.iscsi_tgt_digests.iscsi_tgt_digest -- common/autotest_common.sh@10 -- # set +x 00:20:11.577 ************************************ 00:20:11.577 END TEST iscsi_tgt_digest 00:20:11.577 ************************************ 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1142 -- # return 0 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@92 -- # iscsicleanup 00:20:11.836 Cleaning up iSCSI connection 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:20:11.836 iscsiadm: No matching sessions found 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@981 -- # true 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:20:11.836 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@983 -- # rm -rf 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@93 -- # killprocess 78981 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- 
common/autotest_common.sh@948 -- # '[' -z 78981 ']' 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@952 -- # kill -0 78981 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # uname 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 78981 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:11.837 killing process with pid 78981 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@966 -- # echo 'killing process with pid 78981' 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@967 -- # kill 78981 00:20:11.837 02:15:20 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@972 -- # wait 78981 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_digests -- digests/digests.sh@94 -- # iscsitestfini 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_digests -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:14.371 00:20:14.371 real 0m14.722s 00:20:14.371 user 0m51.998s 00:20:14.371 sys 0m3.836s 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_digests -- common/autotest_common.sh@10 -- # set +x 00:20:14.371 ************************************ 00:20:14.371 END TEST iscsi_tgt_digests 00:20:14.371 ************************************ 00:20:14.371 02:15:22 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:20:14.371 02:15:22 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@43 -- # run_test iscsi_tgt_fuzz /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:20:14.371 02:15:22 iscsi_tgt -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:14.371 02:15:22 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:14.371 02:15:22 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:14.371 ************************************ 00:20:14.371 START TEST iscsi_tgt_fuzz 00:20:14.371 ************************************ 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/fuzz/autofuzz_iscsi.sh --timeout=30 00:20:14.371 * Looking for test storage... 00:20:14.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/fuzz 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- 
iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@11 -- # iscsitestinit 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@13 -- # '[' -z 10.0.0.1 ']' 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@18 -- # '[' -z 10.0.0.2 ']' 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@23 -- # timing_enter iscsi_fuzz_test 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@25 -- # MALLOC_BDEV_SIZE=64 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@26 -- # MALLOC_BLOCK_SIZE=4096 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@28 -- # TEST_TIMEOUT=1200 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@31 -- # for i in "$@" 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@32 -- # case "$i" in 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- 
fuzz/autofuzz_iscsi.sh@34 -- # TEST_TIMEOUT=30 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@39 -- # timing_enter start_iscsi_tgt 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@42 -- # iscsipid=79380 00:20:14.371 Process iscsipid: 79380 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@43 -- # echo 'Process iscsipid: 79380' 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@45 -- # trap 'killprocess $iscsipid; exit 1' SIGINT SIGTERM EXIT 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@47 -- # waitforlisten 79380 00:20:14.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@829 -- # '[' -z 79380 ']' 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --disable-cpumask-locks --wait-for-rpc 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.371 02:15:22 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@862 -- # return 0 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@49 -- # rpc_cmd iscsi_set_options -o 60 -a 16 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@50 -- # rpc_cmd framework_start_init 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.311 02:15:23 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.879 iscsi_tgt is listening. Running tests... 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@51 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@52 -- # timing_exit start_iscsi_tgt 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@54 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@55 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.879 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@56 -- # rpc_cmd bdev_malloc_create 64 4096 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.880 Malloc0 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@57 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:15.880 02:15:24 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@58 -- # sleep 1 00:20:16.815 02:15:25 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@60 -- # trap 'killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:16.815 02:15:25 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/iscsi_fuzz/iscsi_fuzz -m 0xF0 -T 10.0.0.1 -t 30 00:20:48.918 Fuzzing completed. Shutting down the fuzz application. 00:20:48.918 00:20:48.918 device 0x6110000160c0 stats: Sent 9170 valid opcode PDUs, 84601 invalid opcode PDUs. 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@64 -- # rpc_cmd iscsi_delete_target_node iqn.2016-06.io.spdk:disk1 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@67 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@71 -- # killprocess 79380 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@948 -- # '[' -z 79380 ']' 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@952 -- # kill -0 79380 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@953 -- # uname 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79380 00:20:48.918 killing process with pid 79380 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79380' 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@967 -- # kill 79380 00:20:48.918 02:15:56 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@972 -- # wait 79380 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@73 -- # iscsitestfini 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- fuzz/autofuzz_iscsi.sh@75 -- # timing_exit iscsi_fuzz_test 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:50.297 ************************************ 00:20:50.297 END TEST iscsi_tgt_fuzz 00:20:50.297 ************************************ 00:20:50.297 00:20:50.297 real 0m36.101s 00:20:50.297 user 3m20.524s 00:20:50.297 sys 0m16.513s 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:50.297 02:15:58 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:20:50.297 02:15:58 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@44 -- # run_test iscsi_tgt_multiconnection /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:20:50.297 02:15:58 iscsi_tgt -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:20:50.297 02:15:58 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.297 02:15:58 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:20:50.297 ************************************ 00:20:50.297 START TEST iscsi_tgt_multiconnection 00:20:50.297 ************************************ 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection/multiconnection.sh 00:20:50.297 * Looking for test storage... 00:20:50.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/multiconnection 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:20:50.297 
02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@11 -- # iscsitestinit 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@16 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@18 -- # CONNECTION_NUMBER=30 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@40 -- # timing_enter start_iscsi_tgt 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set 
+x 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@42 -- # iscsipid=79838 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@43 -- # echo 'iSCSI target launched. pid: 79838' 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@41 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:20:50.297 iSCSI target launched. pid: 79838 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@44 -- # trap 'remove_backends; iscsicleanup; killprocess $iscsipid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@46 -- # waitforlisten 79838 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 79838 ']' 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.297 02:15:58 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:50.297 [2024-07-23 02:15:59.035809] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:20:50.297 [2024-07-23 02:15:59.036012] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79838 ] 00:20:50.556 [2024-07-23 02:15:59.214729] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.815 [2024-07-23 02:15:59.499199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.383 02:15:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:51.383 02:15:59 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:20:51.383 02:15:59 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 128 00:20:51.383 02:16:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:20:52.320 02:16:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:52.320 02:16:00 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:52.581 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@50 -- # timing_exit start_iscsi_tgt 00:20:52.581 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.581 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:20:52.581 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:20:52.840 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:20:53.099 Creating an iSCSI target node. 00:20:53.099 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@55 -- # echo 'Creating an iSCSI target node.' 00:20:53.099 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs0 -c 1048576 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@56 -- # ls_guid=ad091950-9b02-4a25-a5db-2493b0ab2b00 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@59 -- # get_lvs_free_mb ad091950-9b02-4a25-a5db-2493b0ab2b00 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1364 -- # local lvs_uuid=ad091950-9b02-4a25-a5db-2493b0ab2b00 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1365 -- # local lvs_info 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1366 -- # local fc 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1367 -- # local cs 00:20:53.358 02:16:01 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:20:53.618 { 00:20:53.618 "uuid": "ad091950-9b02-4a25-a5db-2493b0ab2b00", 00:20:53.618 "name": "lvs0", 00:20:53.618 "base_bdev": "Nvme0n1", 00:20:53.618 "total_data_clusters": 5099, 00:20:53.618 "free_clusters": 5099, 00:20:53.618 "block_size": 4096, 00:20:53.618 "cluster_size": 1048576 00:20:53.618 } 00:20:53.618 ]' 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # jq '.[] | 
select(.uuid=="ad091950-9b02-4a25-a5db-2493b0ab2b00") .free_clusters' 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1369 -- # fc=5099 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="ad091950-9b02-4a25-a5db-2493b0ab2b00") .cluster_size' 00:20:53.618 5099 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1370 -- # cs=1048576 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1373 -- # free_mb=5099 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1374 -- # echo 5099 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@60 -- # lvol_bdev_size=169 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # seq 1 30 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:53.618 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_1 169 00:20:53.877 e7beb211-7587-43a0-98a8-5092276a79e8 00:20:53.877 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:53.877 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_2 169 00:20:54.135 0dc43563-59fa-42e9-9ec1-bba0e65e8927 00:20:54.135 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:54.135 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_3 169 00:20:54.395 4935aed5-b46c-45d0-a2f7-ca95cf345391 00:20:54.395 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:54.395 02:16:02 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_4 169 00:20:54.395 323c1985-7be6-4135-ac96-0f285758837a 00:20:54.395 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:54.395 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_5 169 00:20:54.654 fb481147-cc98-4f5f-bb32-8550c4e2f1d2 00:20:54.654 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:54.654 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_6 169 00:20:54.912 0d3dbfc0-3cef-4c5d-9614-fd6c63b74ff1 00:20:54.912 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:54.912 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_7 169 00:20:55.170 e373c7d4-4d2c-4532-891e-80771f359108 00:20:55.170 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:55.170 02:16:03 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_8 169 00:20:55.429 3bffd938-62b5-42e9-914c-953a2e4d1241 00:20:55.429 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:55.429 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_9 169 00:20:55.688 127efcf1-9125-4679-b032-688738472f77 00:20:55.688 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:55.688 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_10 169 00:20:55.688 5253997b-57b1-40f4-9f4b-443768f6ae80 00:20:55.688 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:55.688 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_11 169 00:20:55.947 2157996a-bdd5-473a-a9ae-c51cecb9cf5a 00:20:55.947 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:55.947 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_12 169 00:20:56.205 a0bb6bce-6837-47a0-b10b-2b3f912afc85 00:20:56.205 02:16:04 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:56.205 02:16:04 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_13 169 00:20:56.465 9363909b-c7cb-40f9-aafe-fe510f7ec47a 00:20:56.465 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:56.465 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_14 169 00:20:56.465 9a244f9a-7a24-4e69-ac2d-45c55557b976 00:20:56.465 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:56.465 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_15 169 00:20:56.724 66a46c06-289a-40e1-b196-b136df2556cb 00:20:56.724 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:56.724 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_16 169 00:20:56.984 9cf821c0-0eed-4afa-bb7d-a922600a6764 00:20:56.984 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:56.984 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_17 169 00:20:57.243 322cab25-17a8-405b-96ed-cd12c4a4ad0b 00:20:57.243 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:20:57.243 02:16:05 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_18 169 00:20:57.243 64779e35-7935-49e9-a2e8-78d7bbcd1da0 00:20:57.502 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:57.502 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_19 169 00:20:57.502 07e6a6c3-e1c0-49c9-9cd8-7b30092508a3 00:20:57.502 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:57.502 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_20 169 00:20:57.760 74038742-7ef3-4800-a199-8d668d891765 00:20:57.761 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:57.761 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_21 169 00:20:58.019 314a21e9-2396-45d8-a407-e4c56e1f70b3 00:20:58.019 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:58.019 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_22 169 00:20:58.278 b5f88e30-1b68-4025-8cfd-3a183456c0cf 00:20:58.278 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:58.278 02:16:06 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_23 169 00:20:58.278 73cd8c1b-8319-4fe7-824f-96f71264abcf 00:20:58.279 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:58.279 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_24 169 00:20:58.538 a769b013-27aa-440f-9e67-9f119dba8e5b 00:20:58.538 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:58.538 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_25 169 00:20:58.797 c8ba18fa-dd17-44b7-be5d-f987a6e4f38a 00:20:58.797 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:58.797 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_26 169 00:20:59.056 3fd8384b-02d6-4ccb-a77a-5461eb05b7dc 00:20:59.056 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:59.056 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_27 169 00:20:59.056 7faac784-30f5-47ee-bef1-07e910a23f01 00:20:59.056 02:16:07 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:59.056 02:16:07 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_28 169 00:20:59.315 81678814-f14e-43ef-ae74-4fe9f7e2b565 00:20:59.574 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:59.574 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_29 169 00:20:59.574 14b8fb6f-6e90-4daa-b159-bda7aa246ef0 00:20:59.574 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@61 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:59.574 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ad091950-9b02-4a25-a5db-2493b0ab2b00 lbd_30 169 00:20:59.833 4153f8ef-01e7-4f39-b7fe-8b6d20214951 00:20:59.833 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # seq 1 30 00:20:59.833 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:20:59.833 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_1:0 00:20:59.833 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias lvs0/lbd_1:0 1:2 256 -d 00:21:00.107 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:00.107 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_2:0 00:21:00.107 02:16:08 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target2 Target2_alias lvs0/lbd_2:0 1:2 256 -d 00:21:00.387 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:00.387 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_3:0 00:21:00.387 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias lvs0/lbd_3:0 1:2 256 -d 00:21:00.645 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:00.645 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_4:0 00:21:00.646 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target4 Target4_alias lvs0/lbd_4:0 1:2 256 -d 00:21:00.646 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:00.646 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_5:0 00:21:00.646 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target5 Target5_alias lvs0/lbd_5:0 1:2 256 -d 00:21:00.904 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:00.904 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_6:0 00:21:00.904 
02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target6 Target6_alias lvs0/lbd_6:0 1:2 256 -d 00:21:01.163 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:01.163 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_7:0 00:21:01.163 02:16:09 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target7 Target7_alias lvs0/lbd_7:0 1:2 256 -d 00:21:01.421 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:01.421 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_8:0 00:21:01.421 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target8 Target8_alias lvs0/lbd_8:0 1:2 256 -d 00:21:01.679 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:01.679 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_9:0 00:21:01.679 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target9 Target9_alias lvs0/lbd_9:0 1:2 256 -d 00:21:01.679 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:01.679 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_10:0 00:21:01.679 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target10 Target10_alias lvs0/lbd_10:0 1:2 256 -d 00:21:01.938 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:01.938 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_11:0 00:21:01.938 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target11 Target11_alias lvs0/lbd_11:0 1:2 256 -d 00:21:02.196 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:02.196 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_12:0 00:21:02.196 02:16:10 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target12 Target12_alias lvs0/lbd_12:0 1:2 256 -d 00:21:02.454 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:02.454 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_13:0 00:21:02.454 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target13 Target13_alias lvs0/lbd_13:0 1:2 256 -d 00:21:02.454 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:02.454 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_14:0 00:21:02.454 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target14 Target14_alias lvs0/lbd_14:0 1:2 256 -d 00:21:02.712 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:02.712 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_15:0 00:21:02.712 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target15 Target15_alias lvs0/lbd_15:0 1:2 256 -d 00:21:02.970 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:02.970 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_16:0 00:21:02.970 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target16 Target16_alias lvs0/lbd_16:0 1:2 256 -d 00:21:02.970 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:02.970 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_17:0 00:21:03.229 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target17 Target17_alias lvs0/lbd_17:0 1:2 256 -d 00:21:03.229 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:03.229 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_18:0 00:21:03.229 02:16:11 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
iscsi_create_target_node Target18 Target18_alias lvs0/lbd_18:0 1:2 256 -d 00:21:03.488 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:03.488 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_19:0 00:21:03.488 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target19 Target19_alias lvs0/lbd_19:0 1:2 256 -d 00:21:03.746 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:03.746 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_20:0 00:21:03.746 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target20 Target20_alias lvs0/lbd_20:0 1:2 256 -d 00:21:03.746 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:03.746 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_21:0 00:21:03.746 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target21 Target21_alias lvs0/lbd_21:0 1:2 256 -d 00:21:04.005 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:04.005 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_22:0 00:21:04.005 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target22 Target22_alias 
lvs0/lbd_22:0 1:2 256 -d 00:21:04.263 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:04.263 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_23:0 00:21:04.264 02:16:12 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target23 Target23_alias lvs0/lbd_23:0 1:2 256 -d 00:21:04.264 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:04.264 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_24:0 00:21:04.264 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target24 Target24_alias lvs0/lbd_24:0 1:2 256 -d 00:21:04.522 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:04.522 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_25:0 00:21:04.522 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target25 Target25_alias lvs0/lbd_25:0 1:2 256 -d 00:21:04.779 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:04.779 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_26:0 00:21:04.779 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target26 Target26_alias lvs0/lbd_26:0 1:2 256 -d 00:21:05.037 02:16:13 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:05.037 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_27:0 00:21:05.037 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target27 Target27_alias lvs0/lbd_27:0 1:2 256 -d 00:21:05.037 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:05.037 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_28:0 00:21:05.037 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target28 Target28_alias lvs0/lbd_28:0 1:2 256 -d 00:21:05.295 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:05.295 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_29:0 00:21:05.295 02:16:13 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target29 Target29_alias lvs0/lbd_29:0 1:2 256 -d 00:21:05.553 02:16:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@65 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:05.553 02:16:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@66 -- # lun=lvs0/lbd_30:0 00:21:05.553 02:16:14 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target30 Target30_alias lvs0/lbd_30:0 1:2 256 -d 00:21:05.811 02:16:14 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@69 -- # sleep 1 00:21:06.745 Logging into iSCSI target. 00:21:06.745 02:16:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@71 -- # echo 'Logging into iSCSI target.' 00:21:06.745 02:16:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@72 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target1 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target2 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target4 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target5 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target6 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target7 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target8 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target9 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target10 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target11 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target12 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target13 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target14 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target15 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target16 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target17 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target18 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target19 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target20 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target21 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target22 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target23 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target24 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target25 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target26 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target27 00:21:06.745 10.0.0.1:3260,1 
iqn.2016-06.io.spdk:Target28 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target29 00:21:06.745 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target30 00:21:06.745 02:16:15 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@73 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:06.745 [2024-07-23 02:16:15.457850] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:06.745 [2024-07-23 02:16:15.495751] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:06.745 [2024-07-23 02:16:15.499047] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.537177] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.547893] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.572591] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.618590] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.664781] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.666175] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.714028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.755079] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.004 [2024-07-23 02:16:15.776351] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.262 [2024-07-23 02:16:15.808650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:07.262 
Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:21:07.262 Logging in 
to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:21:07.262 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:21:07.262 Login to [iface: default, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:07.262 Login to [iface: default, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 00:21:07.262 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:07.262 Login to [iface: default, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:21:07.262 Login to [iface: default, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 
00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:21:07.263 Login to [iface: default, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful.
[2024-07-23 02:16:15.838458] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.263 [2024-07-23 02:16:15.868435] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.263 [2024-07-23 02:16:15.898449] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.263 [2024-07-23 02:16:15.933391] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.263 [2024-07-23 02:16:15.972118] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.263 [2024-07-23 02:16:15.993601] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.263 [2024-07-23 02:16:16.022833] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.054792] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.088535] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.119329] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.147901] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.174549] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.217592] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.245032] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521 [2024-07-23 02:16:16.267418] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.521
Login to [iface: default, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful.
00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:21:07.521 Login to [iface: default, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 00:21:07.521 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@74 -- # waitforiscsidevices 30 00:21:07.521 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@116 -- # local num=30 00:21:07.521 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:07.780 [2024-07-23 02:16:16.301670] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.780 [2024-07-23 02:16:16.326282] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@119 -- # n=30 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@120 -- # '[' 30 -ne 30 ']' 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@123 -- # return 0 00:21:07.780 Running FIO 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@76 -- # echo 'Running FIO' 00:21:07.780 02:16:16 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 64 -t randrw -r 5 00:21:07.780 [global] 00:21:07.780 thread=1 00:21:07.780 invalidate=1 00:21:07.780 rw=randrw 
00:21:07.780 time_based=1 00:21:07.780 runtime=5 00:21:07.780 ioengine=libaio 00:21:07.780 direct=1 00:21:07.780 bs=131072 00:21:07.780 iodepth=64 00:21:07.780 norandommap=1 00:21:07.780 numjobs=1 00:21:07.780 00:21:07.780 [job0] 00:21:07.780 filename=/dev/sda 00:21:07.780 [job1] 00:21:07.780 filename=/dev/sdb 00:21:07.780 [job2] 00:21:07.780 filename=/dev/sdc 00:21:07.780 [job3] 00:21:07.780 filename=/dev/sdd 00:21:07.780 [job4] 00:21:07.780 filename=/dev/sde 00:21:07.780 [job5] 00:21:07.780 filename=/dev/sdf 00:21:07.780 [job6] 00:21:07.780 filename=/dev/sdg 00:21:07.780 [job7] 00:21:07.780 filename=/dev/sdh 00:21:07.780 [job8] 00:21:07.780 filename=/dev/sdi 00:21:07.780 [job9] 00:21:07.780 filename=/dev/sdj 00:21:07.780 [job10] 00:21:07.780 filename=/dev/sdk 00:21:07.780 [job11] 00:21:07.780 filename=/dev/sdl 00:21:07.780 [job12] 00:21:07.780 filename=/dev/sdm 00:21:07.780 [job13] 00:21:07.780 filename=/dev/sdn 00:21:07.780 [job14] 00:21:07.780 filename=/dev/sdo 00:21:07.780 [job15] 00:21:07.780 filename=/dev/sdp 00:21:07.780 [job16] 00:21:07.780 filename=/dev/sdq 00:21:07.780 [job17] 00:21:07.780 filename=/dev/sdr 00:21:07.780 [job18] 00:21:07.780 filename=/dev/sds 00:21:07.780 [job19] 00:21:07.780 filename=/dev/sdt 00:21:07.780 [job20] 00:21:07.780 filename=/dev/sdu 00:21:07.780 [job21] 00:21:07.780 filename=/dev/sdv 00:21:07.780 [job22] 00:21:07.780 filename=/dev/sdw 00:21:07.780 [job23] 00:21:07.780 filename=/dev/sdx 00:21:07.780 [job24] 00:21:07.780 filename=/dev/sdy 00:21:07.780 [job25] 00:21:07.780 filename=/dev/sdz 00:21:07.780 [job26] 00:21:07.780 filename=/dev/sdaa 00:21:07.780 [job27] 00:21:07.780 filename=/dev/sdab 00:21:07.780 [job28] 00:21:07.780 filename=/dev/sdac 00:21:07.780 [job29] 00:21:07.780 filename=/dev/sdad 00:21:08.347 queue_depth set to 113 (sda) 00:21:08.347 queue_depth set to 113 (sdb) 00:21:08.347 queue_depth set to 113 (sdc) 00:21:08.347 queue_depth set to 113 (sdd) 00:21:08.347 queue_depth set to 113 (sde) 00:21:08.347 queue_depth 
set to 113 (sdf) 00:21:08.347 queue_depth set to 113 (sdg) 00:21:08.347 queue_depth set to 113 (sdh) 00:21:08.347 queue_depth set to 113 (sdi) 00:21:08.347 queue_depth set to 113 (sdj) 00:21:08.606 queue_depth set to 113 (sdk) 00:21:08.606 queue_depth set to 113 (sdl) 00:21:08.606 queue_depth set to 113 (sdm) 00:21:08.606 queue_depth set to 113 (sdn) 00:21:08.606 queue_depth set to 113 (sdo) 00:21:08.606 queue_depth set to 113 (sdp) 00:21:08.606 queue_depth set to 113 (sdq) 00:21:08.606 queue_depth set to 113 (sdr) 00:21:08.606 queue_depth set to 113 (sds) 00:21:08.606 queue_depth set to 113 (sdt) 00:21:08.606 queue_depth set to 113 (sdu) 00:21:08.606 queue_depth set to 113 (sdv) 00:21:08.606 queue_depth set to 113 (sdw) 00:21:08.606 queue_depth set to 113 (sdx) 00:21:08.606 queue_depth set to 113 (sdy) 00:21:08.865 queue_depth set to 113 (sdz) 00:21:08.865 queue_depth set to 113 (sdaa) 00:21:08.865 queue_depth set to 113 (sdab) 00:21:08.865 queue_depth set to 113 (sdac) 00:21:08.865 queue_depth set to 113 (sdad) 00:21:08.865 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job1: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job2: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job3: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job4: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job5: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job6: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job7: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 
128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job8: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job9: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job10: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job11: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job12: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job13: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job14: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job15: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job16: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job17: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job18: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job19: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job20: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job21: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job22: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job23: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 
128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job24: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job25: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job26: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job27: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job28: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 job29: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=64 00:21:08.865 fio-3.35 00:21:08.865 Starting 30 threads 00:21:08.865 [2024-07-23 02:16:17.625614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:08.865 [2024-07-23 02:16:17.628265] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:08.865 [2024-07-23 02:16:17.630721] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:08.865 [2024-07-23 02:16:17.633417] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:08.865 [2024-07-23 02:16:17.637073] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:08.865 [2024-07-23 02:16:17.640451] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.643444] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.646513] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.649754] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 
02:16:17.652768] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.655636] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.658825] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.661486] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.664712] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.667964] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.670536] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.673423] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.676184] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.678634] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.124 [2024-07-23 02:16:17.682110] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.685123] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.688312] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.691044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.694155] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.697104] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.699932] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.703328] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.706152] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.709058] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:09.125 [2024-07-23 02:16:17.711715] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.726638] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.739387] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.742738] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.745822] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.747974] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.750476] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.752688] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.754954] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.757221] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.759478] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.761691] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.763806] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.766067] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.768164] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.770354] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.772594] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.774804] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 [2024-07-23 02:16:23.777120] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.696 00:21:15.696 job0: (groupid=0, jobs=1): err= 0: pid=80743: Tue Jul 23 02:16:23 2024 00:21:15.696 read: IOPS=64, BW=8291KiB/s (8490kB/s)(44.8MiB/5527msec) 00:21:15.696 slat (nsec): min=8291, max=69924, avg=27829.15, stdev=12411.11 00:21:15.696 clat (msec): min=3, max=558, avg=70.92, stdev=62.13 00:21:15.696 lat (msec): min=3, max=558, avg=70.95, stdev=62.13 00:21:15.696 clat percentiles (msec): 00:21:15.696 | 1.00th=[ 6], 5.00th=[ 35], 10.00th=[ 52], 20.00th=[ 55], 00:21:15.696 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.696 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 94], 95.00th=[ 232], 00:21:15.696 | 99.00th=[ 266], 99.50th=[ 542], 99.90th=[ 558], 99.95th=[ 558], 00:21:15.696 | 99.99th=[ 558] 00:21:15.696 bw ( KiB/s): min= 6144, max=15616, per=3.38%, avg=9086.50, stdev=2816.34, samples=10 00:21:15.696 iops : min= 48, max= 122, avg=70.90, stdev=22.06, samples=10 00:21:15.696 write: IOPS=70, BW=9009KiB/s (9225kB/s)(48.6MiB/5527msec); 0 zone resets 00:21:15.696 slat (usec): min=10, max=112, avg=34.78, stdev=13.56 00:21:15.696 clat (msec): min=48, max=1370, avg=842.45, stdev=153.56 00:21:15.696 lat (msec): min=48, max=1370, avg=842.48, stdev=153.56 00:21:15.696 clat percentiles (msec): 
00:21:15.696 | 1.00th=[ 241], 5.00th=[ 558], 10.00th=[ 684], 20.00th=[ 818], 00:21:15.696 | 30.00th=[ 835], 40.00th=[ 852], 50.00th=[ 860], 60.00th=[ 860], 00:21:15.696 | 70.00th=[ 877], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1070], 00:21:15.696 | 99.00th=[ 1334], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368], 00:21:15.696 | 99.99th=[ 1368] 00:21:15.696 bw ( KiB/s): min= 3584, max= 9216, per=3.12%, avg=8420.50, stdev=1724.59, samples=10 00:21:15.696 iops : min= 28, max= 72, avg=65.70, stdev=13.43, samples=10 00:21:15.696 lat (msec) : 4=0.27%, 10=0.40%, 20=1.20%, 50=2.95%, 100=38.82% 00:21:15.696 lat (msec) : 250=3.48%, 500=2.28%, 750=4.82%, 1000=42.44%, 2000=3.35% 00:21:15.696 cpu : usr=0.33%, sys=0.33%, ctx=428, majf=0, minf=1 00:21:15.696 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:21:15.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.696 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.696 issued rwts: total=358,389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.696 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.696 job1: (groupid=0, jobs=1): err= 0: pid=80745: Tue Jul 23 02:16:23 2024 00:21:15.696 read: IOPS=62, BW=8004KiB/s (8196kB/s)(43.1MiB/5517msec) 00:21:15.696 slat (usec): min=8, max=618, avg=43.91, stdev=74.45 00:21:15.696 clat (msec): min=40, max=548, avg=72.55, stdev=50.11 00:21:15.696 lat (msec): min=40, max=548, avg=72.59, stdev=50.10 00:21:15.696 clat percentiles (msec): 00:21:15.696 | 1.00th=[ 43], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:21:15.696 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.696 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 136], 95.00th=[ 157], 00:21:15.696 | 99.00th=[ 241], 99.50th=[ 535], 99.90th=[ 550], 99.95th=[ 550], 00:21:15.696 | 99.99th=[ 550] 00:21:15.696 bw ( KiB/s): min= 4864, max=14562, per=3.26%, avg=8775.60, stdev=2842.35, samples=10 00:21:15.696 iops : min= 38, 
max= 113, avg=68.40, stdev=21.97, samples=10 00:21:15.696 write: IOPS=70, BW=9025KiB/s (9242kB/s)(48.6MiB/5517msec); 0 zone resets 00:21:15.696 slat (usec): min=9, max=860, avg=43.49, stdev=62.24 00:21:15.696 clat (msec): min=230, max=1360, avg=841.82, stdev=146.74 00:21:15.696 lat (msec): min=230, max=1360, avg=841.86, stdev=146.75 00:21:15.696 clat percentiles (msec): 00:21:15.696 | 1.00th=[ 300], 5.00th=[ 550], 10.00th=[ 693], 20.00th=[ 810], 00:21:15.696 | 30.00th=[ 827], 40.00th=[ 844], 50.00th=[ 852], 60.00th=[ 860], 00:21:15.696 | 70.00th=[ 877], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1070], 00:21:15.696 | 99.00th=[ 1334], 99.50th=[ 1351], 99.90th=[ 1368], 99.95th=[ 1368], 00:21:15.696 | 99.99th=[ 1368] 00:21:15.696 bw ( KiB/s): min= 3065, max= 9216, per=3.11%, avg=8394.20, stdev=1889.45, samples=10 00:21:15.696 iops : min= 23, max= 72, avg=65.40, stdev=15.02, samples=10 00:21:15.696 lat (msec) : 50=1.23%, 100=40.33%, 250=5.45%, 500=1.36%, 750=5.18% 00:21:15.696 lat (msec) : 1000=42.92%, 2000=3.54% 00:21:15.696 cpu : usr=0.25%, sys=0.38%, ctx=468, majf=0, minf=1 00:21:15.696 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:21:15.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.696 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.696 issued rwts: total=345,389,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.696 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.696 job2: (groupid=0, jobs=1): err= 0: pid=80751: Tue Jul 23 02:16:23 2024 00:21:15.696 read: IOPS=73, BW=9351KiB/s (9575kB/s)(50.2MiB/5503msec) 00:21:15.696 slat (usec): min=8, max=705, avg=32.25, stdev=45.66 00:21:15.696 clat (msec): min=39, max=532, avg=71.33, stdev=53.39 00:21:15.696 lat (msec): min=39, max=532, avg=71.36, stdev=53.39 00:21:15.696 clat percentiles (msec): 00:21:15.696 | 1.00th=[ 41], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 55], 00:21:15.697 | 30.00th=[ 56], 40.00th=[ 57], 
50.00th=[ 57], 60.00th=[ 58],
00:21:15.697 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 97], 95.00th=[ 182],
00:21:15.697 | 99.00th=[ 232], 99.50th=[ 518], 99.90th=[ 535], 99.95th=[ 535],
00:21:15.697 | 99.99th=[ 535]
00:21:15.697 bw ( KiB/s): min= 7168, max=13824, per=3.79%, avg=10186.40, stdev=2614.11, samples=10
00:21:15.697 iops : min= 56, max= 108, avg=79.50, stdev=20.37, samples=10
00:21:15.697 write: IOPS=70, BW=9048KiB/s (9265kB/s)(48.6MiB/5503msec); 0 zone resets
00:21:15.697 slat (nsec): min=9504, max=87327, avg=34854.09, stdev=13467.36
00:21:15.697 clat (msec): min=231, max=1297, avg=830.13, stdev=132.82
00:21:15.697 lat (msec): min=231, max=1297, avg=830.16, stdev=132.82
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 326], 5.00th=[ 550], 10.00th=[ 693], 20.00th=[ 810],
00:21:15.697 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.697 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 902], 95.00th=[ 1011],
00:21:15.697 | 99.00th=[ 1250], 99.50th=[ 1284], 99.90th=[ 1301], 99.95th=[ 1301],
00:21:15.697 | 99.99th=[ 1301]
00:21:15.697 bw ( KiB/s): min= 3072, max= 9472, per=3.12%, avg=8420.60, stdev=1898.07, samples=10
00:21:15.697 iops : min= 24, max= 74, avg=65.70, stdev=14.82, samples=10
00:21:15.697 lat (msec) : 50=3.79%, 100=42.35%, 250=4.55%, 500=1.26%, 750=4.93%
00:21:15.697 lat (msec) : 1000=40.46%, 2000=2.65%
00:21:15.697 cpu : usr=0.22%, sys=0.42%, ctx=445, majf=0, minf=1
00:21:15.697 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.0%
00:21:15.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.697 issued rwts: total=402,389,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.697 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.697 job3: (groupid=0, jobs=1): err= 0: pid=80752: Tue Jul 23 02:16:23 2024
00:21:15.697 read: IOPS=67, BW=8628KiB/s (8835kB/s)(46.5MiB/5519msec)
00:21:15.697 slat (nsec): min=11899, max=87374, avg=34198.02, stdev=16535.27
00:21:15.697 clat (msec): min=15, max=552, avg=70.25, stdev=51.91
00:21:15.697 lat (msec): min=15, max=552, avg=70.29, stdev=51.91
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 30], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.697 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.697 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 110], 95.00th=[ 138],
00:21:15.697 | 99.00th=[ 245], 99.50th=[ 527], 99.90th=[ 550], 99.95th=[ 550],
00:21:15.697 | 99.99th=[ 550]
00:21:15.697 bw ( KiB/s): min= 5888, max=15104, per=3.51%, avg=9444.70, stdev=2448.87, samples=10
00:21:15.697 iops : min= 46, max= 118, avg=73.70, stdev=19.18, samples=10
00:21:15.697 write: IOPS=70, BW=9022KiB/s (9238kB/s)(48.6MiB/5519msec); 0 zone resets
00:21:15.697 slat (usec): min=13, max=1011, avg=45.16, stdev=58.61
00:21:15.697 clat (msec): min=226, max=1304, avg=839.21, stdev=139.43
00:21:15.697 lat (msec): min=226, max=1304, avg=839.25, stdev=139.43
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 275], 5.00th=[ 550], 10.00th=[ 684], 20.00th=[ 818],
00:21:15.697 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 852], 60.00th=[ 860],
00:21:15.697 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 902], 95.00th=[ 1083],
00:21:15.697 | 99.00th=[ 1217], 99.50th=[ 1284], 99.90th=[ 1301], 99.95th=[ 1301],
00:21:15.697 | 99.99th=[ 1301]
00:21:15.697 bw ( KiB/s): min= 3072, max= 9472, per=3.12%, avg=8420.50, stdev=1901.32, samples=10
00:21:15.697 iops : min= 24, max= 74, avg=65.70, stdev=14.82, samples=10
00:21:15.697 lat (msec) : 20=0.26%, 50=2.37%, 100=41.00%, 250=5.12%, 500=1.31%
00:21:15.697 lat (msec) : 750=5.26%, 1000=41.39%, 2000=3.29%
00:21:15.697 cpu : usr=0.13%, sys=0.47%, ctx=440, majf=0, minf=1
00:21:15.697 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7%
00:21:15.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.697 issued rwts: total=372,389,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.697 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.697 job4: (groupid=0, jobs=1): err= 0: pid=80771: Tue Jul 23 02:16:23 2024
00:21:15.697 read: IOPS=69, BW=8846KiB/s (9058kB/s)(47.4MiB/5484msec)
00:21:15.697 slat (usec): min=9, max=351, avg=28.82, stdev=20.46
00:21:15.697 clat (msec): min=38, max=523, avg=78.27, stdev=61.51
00:21:15.697 lat (msec): min=38, max=523, avg=78.30, stdev=61.51
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 43], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56],
00:21:15.697 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.697 | 70.00th=[ 61], 80.00th=[ 83], 90.00th=[ 138], 95.00th=[ 178],
00:21:15.697 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 523], 99.95th=[ 523],
00:21:15.697 | 99.99th=[ 523]
00:21:15.697 bw ( KiB/s): min= 6400, max=19751, per=3.56%, avg=9576.50, stdev=3854.98, samples=10
00:21:15.697 iops : min= 50, max= 154, avg=74.70, stdev=30.04, samples=10
00:21:15.697 write: IOPS=70, BW=9080KiB/s (9297kB/s)(48.6MiB/5484msec); 0 zone resets
00:21:15.697 slat (usec): min=10, max=648, avg=37.42, stdev=33.49
00:21:15.697 clat (msec): min=203, max=1301, avg=824.52, stdev=147.82
00:21:15.697 lat (msec): min=203, max=1301, avg=824.56, stdev=147.82
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 284], 5.00th=[ 535], 10.00th=[ 659], 20.00th=[ 768],
00:21:15.697 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 860],
00:21:15.697 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 902], 95.00th=[ 1070],
00:21:15.697 | 99.00th=[ 1267], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301],
00:21:15.697 | 99.99th=[ 1301]
00:21:15.697 bw ( KiB/s): min= 3591, max= 9472, per=3.14%, avg=8472.50, stdev=1739.62, samples=10
00:21:15.697 iops : min= 28, max= 74, avg=66.10, stdev=13.58, samples=10
00:21:15.697 lat (msec) : 50=1.04%, 100=40.10%, 250=7.94%, 500=1.69%, 750=8.33%
00:21:15.697 lat (msec) : 1000=37.63%, 2000=3.26%
00:21:15.697 cpu : usr=0.20%, sys=0.46%, ctx=432, majf=0, minf=1
00:21:15.697 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8%
00:21:15.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.697 issued rwts: total=379,389,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.697 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.697 job5: (groupid=0, jobs=1): err= 0: pid=80774: Tue Jul 23 02:16:23 2024
00:21:15.697 read: IOPS=72, BW=9339KiB/s (9563kB/s)(50.1MiB/5496msec)
00:21:15.697 slat (usec): min=8, max=861, avg=41.99, stdev=77.41
00:21:15.697 clat (msec): min=41, max=544, avg=72.26, stdev=56.75
00:21:15.697 lat (msec): min=41, max=544, avg=72.30, stdev=56.75
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 44], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.697 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58],
00:21:15.697 | 70.00th=[ 59], 80.00th=[ 63], 90.00th=[ 112], 95.00th=[ 182],
00:21:15.697 | 99.00th=[ 234], 99.50th=[ 518], 99.90th=[ 542], 99.95th=[ 542],
00:21:15.697 | 99.99th=[ 542]
00:21:15.697 bw ( KiB/s): min= 6400, max=14592, per=3.78%, avg=10160.90, stdev=2852.17, samples=10
00:21:15.697 iops : min= 50, max= 114, avg=79.30, stdev=22.24, samples=10
00:21:15.697 write: IOPS=70, BW=9036KiB/s (9253kB/s)(48.5MiB/5496msec); 0 zone resets
00:21:15.697 slat (usec): min=9, max=678, avg=43.15, stdev=67.71
00:21:15.697 clat (msec): min=227, max=1325, avg=830.34, stdev=140.78
00:21:15.697 lat (msec): min=228, max=1325, avg=830.39, stdev=140.79
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 317], 5.00th=[ 542], 10.00th=[ 676], 20.00th=[ 810],
00:21:15.697 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.697 | 70.00th=[ 860], 80.00th=[ 869], 90.00th=[ 894], 95.00th=[ 1083],
00:21:15.697 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:21:15.697 | 99.99th=[ 1334]
00:21:15.697 bw ( KiB/s): min= 3328, max= 9216, per=3.12%, avg=8420.70, stdev=1807.96, samples=10
00:21:15.697 iops : min= 26, max= 72, avg=65.70, stdev=14.13, samples=10
00:21:15.697 lat (msec) : 50=2.03%, 100=43.35%, 250=5.20%, 500=1.39%, 750=4.94%
00:21:15.697 lat (msec) : 1000=40.05%, 2000=3.04%
00:21:15.697 cpu : usr=0.13%, sys=0.38%, ctx=560, majf=0, minf=1
00:21:15.697 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0%
00:21:15.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.697 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.697 issued rwts: total=401,388,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.697 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.697 job6: (groupid=0, jobs=1): err= 0: pid=80800: Tue Jul 23 02:16:23 2024
00:21:15.697 read: IOPS=69, BW=8872KiB/s (9085kB/s)(47.8MiB/5511msec)
00:21:15.697 slat (usec): min=6, max=561, avg=29.83, stdev=38.70
00:21:15.697 clat (msec): min=15, max=549, avg=70.09, stdev=57.98
00:21:15.697 lat (msec): min=15, max=549, avg=70.12, stdev=57.97
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 52], 20.00th=[ 55],
00:21:15.697 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58],
00:21:15.697 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 92], 95.00th=[ 146],
00:21:15.697 | 99.00th=[ 535], 99.50th=[ 550], 99.90th=[ 550], 99.95th=[ 550],
00:21:15.697 | 99.99th=[ 550]
00:21:15.697 bw ( KiB/s): min= 5620, max=14336, per=3.60%, avg=9675.60, stdev=2932.64, samples=10
00:21:15.697 iops : min= 43, max= 112, avg=75.50, stdev=23.05, samples=10
00:21:15.697 write: IOPS=70, BW=8965KiB/s (9181kB/s)(48.2MiB/5511msec); 0 zone resets
00:21:15.697 slat (usec): min=6, max=699, avg=35.24, stdev=48.46
00:21:15.697 clat (msec): min=194, max=1316, avg=842.84, stdev=144.66
00:21:15.697 lat (msec): min=194, max=1316, avg=842.87, stdev=144.67
00:21:15.697 clat percentiles (msec):
00:21:15.697 | 1.00th=[ 271], 5.00th=[ 575], 10.00th=[ 709], 20.00th=[ 818],
00:21:15.697 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 852], 60.00th=[ 860],
00:21:15.697 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 902], 95.00th=[ 1116],
00:21:15.697 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318],
00:21:15.697 | 99.99th=[ 1318]
00:21:15.697 bw ( KiB/s): min= 3072, max= 9453, per=3.11%, avg=8369.30, stdev=1884.04, samples=10
00:21:15.698 iops : min= 24, max= 73, avg=65.30, stdev=14.67, samples=10
00:21:15.698 lat (msec) : 20=0.26%, 50=4.04%, 100=40.89%, 250=4.17%, 500=1.43%
00:21:15.698 lat (msec) : 750=4.56%, 1000=41.41%, 2000=3.26%
00:21:15.698 cpu : usr=0.15%, sys=0.36%, ctx=485, majf=0, minf=1
00:21:15.698 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8%
00:21:15.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.698 issued rwts: total=382,386,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.698 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.698 job7: (groupid=0, jobs=1): err= 0: pid=80803: Tue Jul 23 02:16:23 2024
00:21:15.698 read: IOPS=64, BW=8315KiB/s (8514kB/s)(45.0MiB/5542msec)
00:21:15.698 slat (nsec): min=8873, max=65454, avg=26639.37, stdev=10797.16
00:21:15.698 clat (msec): min=5, max=546, avg=67.06, stdev=45.14
00:21:15.698 lat (msec): min=5, max=546, avg=67.08, stdev=45.14
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 10], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.698 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58],
00:21:15.698 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 82], 95.00th=[ 142],
00:21:15.698 | 99.00th=[ 249], 99.50th=[ 257], 99.90th=[ 550], 99.95th=[ 550],
00:21:15.698 | 99.99th=[ 550]
00:21:15.698 bw ( KiB/s): min= 6400, max=15360, per=3.42%, avg=9188.60, stdev=2535.95, samples=10
00:21:15.698 iops : min= 50, max= 120, avg=71.70, stdev=19.83, samples=10
00:21:15.698 write: IOPS=70, BW=9031KiB/s (9247kB/s)(48.9MiB/5542msec); 0 zone resets
00:21:15.698 slat (usec): min=9, max=2797, avg=41.73, stdev=140.40
00:21:15.698 clat (msec): min=121, max=1326, avg=843.74, stdev=148.39
00:21:15.698 lat (msec): min=121, max=1326, avg=843.79, stdev=148.40
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 259], 5.00th=[ 575], 10.00th=[ 684], 20.00th=[ 827],
00:21:15.698 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 860], 60.00th=[ 860],
00:21:15.698 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1083],
00:21:15.698 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:21:15.698 | 99.99th=[ 1334]
00:21:15.698 bw ( KiB/s): min= 3072, max= 9472, per=3.12%, avg=8420.50, stdev=1901.32, samples=10
00:21:15.698 iops : min= 24, max= 74, avg=65.70, stdev=14.82, samples=10
00:21:15.698 lat (msec) : 10=0.53%, 20=0.80%, 50=1.73%, 100=41.01%, 250=3.86%
00:21:15.698 lat (msec) : 500=1.46%, 750=4.93%, 1000=42.34%, 2000=3.33%
00:21:15.698 cpu : usr=0.11%, sys=0.51%, ctx=432, majf=0, minf=1
00:21:15.698 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6%
00:21:15.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.698 issued rwts: total=360,391,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.698 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.698 job8: (groupid=0, jobs=1): err= 0: pid=80804: Tue Jul 23 02:16:23 2024
00:21:15.698 read: IOPS=62, BW=8025KiB/s (8217kB/s)(43.1MiB/5503msec)
00:21:15.698 slat (usec): min=8, max=3888, avg=42.21, stdev=210.74
00:21:15.698 clat (msec): min=42, max=526, avg=77.00, stdev=59.60
00:21:15.698 lat (msec): min=42, max=526, avg=77.04, stdev=59.60
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 51], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 56],
00:21:15.698 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58],
00:21:15.698 | 70.00th=[ 60], 80.00th=[ 65], 90.00th=[ 140], 95.00th=[ 178],
00:21:15.698 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 527], 99.95th=[ 527],
00:21:15.698 | 99.99th=[ 527]
00:21:15.698 bw ( KiB/s): min= 5376, max=16063, per=3.24%, avg=8721.80, stdev=2937.76, samples=10
00:21:15.698 iops : min= 42, max= 125, avg=68.00, stdev=22.90, samples=10
00:21:15.698 write: IOPS=70, BW=9048KiB/s (9265kB/s)(48.6MiB/5503msec); 0 zone resets
00:21:15.698 slat (usec): min=13, max=1120, avg=49.38, stdev=87.48
00:21:15.698 clat (msec): min=220, max=1289, avg=834.19, stdev=145.26
00:21:15.698 lat (msec): min=221, max=1289, avg=834.24, stdev=145.25
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 296], 5.00th=[ 550], 10.00th=[ 684], 20.00th=[ 802],
00:21:15.698 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 860],
00:21:15.698 | 70.00th=[ 877], 80.00th=[ 894], 90.00th=[ 927], 95.00th=[ 1083],
00:21:15.698 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1284], 99.95th=[ 1284],
00:21:15.698 | 99.99th=[ 1284]
00:21:15.698 bw ( KiB/s): min= 3314, max= 9472, per=3.13%, avg=8444.80, stdev=1822.33, samples=10
00:21:15.698 iops : min= 25, max= 74, avg=65.80, stdev=14.51, samples=10
00:21:15.698 lat (msec) : 50=0.41%, 100=39.24%, 250=7.22%, 500=1.50%, 750=6.54%
00:21:15.698 lat (msec) : 1000=42.10%, 2000=3.00%
00:21:15.698 cpu : usr=0.22%, sys=0.40%, ctx=500, majf=0, minf=1
00:21:15.698 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4%
00:21:15.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.698 issued rwts: total=345,389,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.698 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.698 job9: (groupid=0, jobs=1): err= 0: pid=80805: Tue Jul 23 02:16:23 2024
00:21:15.698 read: IOPS=75, BW=9683KiB/s (9916kB/s)(52.0MiB/5499msec)
00:21:15.698 slat (nsec): min=10289, max=98854, avg=27545.97, stdev=13057.44
00:21:15.698 clat (msec): min=37, max=539, avg=77.36, stdev=58.06
00:21:15.698 lat (msec): min=37, max=539, avg=77.39, stdev=58.06
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 42], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 56],
00:21:15.698 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.698 | 70.00th=[ 61], 80.00th=[ 83], 90.00th=[ 136], 95.00th=[ 182],
00:21:15.698 | 99.00th=[ 222], 99.50th=[ 527], 99.90th=[ 542], 99.95th=[ 542],
00:21:15.698 | 99.99th=[ 542]
00:21:15.698 bw ( KiB/s): min= 6144, max=23855, per=3.91%, avg=10524.50, stdev=5161.68, samples=10
00:21:15.698 iops : min= 48, max= 186, avg=82.10, stdev=40.25, samples=10
00:21:15.698 write: IOPS=70, BW=9008KiB/s (9224kB/s)(48.4MiB/5499msec); 0 zone resets
00:21:15.698 slat (usec): min=11, max=145, avg=34.15, stdev=13.33
00:21:15.698 clat (msec): min=222, max=1305, avg=824.76, stdev=149.29
00:21:15.698 lat (msec): min=222, max=1305, avg=824.79, stdev=149.30
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 275], 5.00th=[ 550], 10.00th=[ 667], 20.00th=[ 743],
00:21:15.698 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 860],
00:21:15.698 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 902], 95.00th=[ 1083],
00:21:15.698 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1301], 99.95th=[ 1301],
00:21:15.698 | 99.99th=[ 1301]
00:21:15.698 bw ( KiB/s): min= 3078, max= 9216, per=3.11%, avg=8395.60, stdev=1882.10, samples=10
00:21:15.698 iops : min= 24, max= 72, avg=65.50, stdev=14.71, samples=10
00:21:15.698 lat (msec) : 50=2.49%, 100=40.85%, 250=8.22%, 500=1.49%, 750=8.47%
00:21:15.698 lat (msec) : 1000=35.24%, 2000=3.24%
00:21:15.698 cpu : usr=0.22%, sys=0.44%, ctx=450, majf=0, minf=1
00:21:15.698 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:21:15.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.698 issued rwts: total=416,387,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.698 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.698 job10: (groupid=0, jobs=1): err= 0: pid=80814: Tue Jul 23 02:16:23 2024
00:21:15.698 read: IOPS=74, BW=9503KiB/s (9731kB/s)(51.4MiB/5536msec)
00:21:15.698 slat (nsec): min=10249, max=84393, avg=28169.66, stdev=12257.28
00:21:15.698 clat (msec): min=10, max=545, avg=66.04, stdev=38.31
00:21:15.698 lat (msec): min=10, max=545, avg=66.07, stdev=38.31
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 55],
00:21:15.698 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58],
00:21:15.698 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 84], 95.00th=[ 129],
00:21:15.698 | 99.00th=[ 224], 99.50th=[ 234], 99.90th=[ 550], 99.95th=[ 550],
00:21:15.698 | 99.99th=[ 550]
00:21:15.698 bw ( KiB/s): min= 6656, max=15360, per=3.90%, avg=10494.40, stdev=3127.65, samples=10
00:21:15.698 iops : min= 52, max= 120, avg=81.90, stdev=24.52, samples=10
00:21:15.698 write: IOPS=70, BW=9017KiB/s (9234kB/s)(48.8MiB/5536msec); 0 zone resets
00:21:15.698 slat (usec): min=10, max=4031, avg=47.30, stdev=202.67
00:21:15.698 clat (msec): min=197, max=1268, avg=836.68, stdev=138.81
00:21:15.698 lat (msec): min=201, max=1268, avg=836.73, stdev=138.77
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 279], 5.00th=[ 575], 10.00th=[ 693], 20.00th=[ 810],
00:21:15.698 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.698 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1070],
00:21:15.698 | 99.00th=[ 1267], 99.50th=[ 1267], 99.90th=[ 1267], 99.95th=[ 1267],
00:21:15.698 | 99.99th=[ 1267]
00:21:15.698 bw ( KiB/s): min= 3072, max= 9472, per=3.11%, avg=8395.00, stdev=1895.27, samples=10
00:21:15.698 iops : min= 24, max= 74, avg=65.50, stdev=14.78, samples=10
00:21:15.698 lat (msec) : 20=0.50%, 50=3.00%, 100=43.57%, 250=4.24%, 500=1.37%
00:21:15.698 lat (msec) : 750=4.37%, 1000=39.70%, 2000=3.25%
00:21:15.698 cpu : usr=0.07%, sys=0.60%, ctx=434, majf=0, minf=1
00:21:15.698 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1%
00:21:15.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.698 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.698 issued rwts: total=411,390,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.698 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.698 job11: (groupid=0, jobs=1): err= 0: pid=80850: Tue Jul 23 02:16:23 2024
00:21:15.698 read: IOPS=65, BW=8440KiB/s (8643kB/s)(45.2MiB/5490msec)
00:21:15.698 slat (usec): min=8, max=232, avg=27.52, stdev=17.74
00:21:15.698 clat (msec): min=40, max=496, avg=75.85, stdev=48.05
00:21:15.698 lat (msec): min=41, max=496, avg=75.88, stdev=48.04
00:21:15.698 clat percentiles (msec):
00:21:15.698 | 1.00th=[ 43], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.699 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.699 | 70.00th=[ 61], 80.00th=[ 65], 90.00th=[ 142], 95.00th=[ 192],
00:21:15.699 | 99.00th=[ 234], 99.50th=[ 241], 99.90th=[ 498], 99.95th=[ 498],
00:21:15.699 | 99.99th=[ 498]
00:21:15.699 bw ( KiB/s): min= 5120, max=18944, per=3.43%, avg=9239.30, stdev=4075.78, samples=10
00:21:15.699 iops : min= 40, max= 148, avg=72.10, stdev=31.79, samples=10
00:21:15.699 write: IOPS=71, BW=9140KiB/s (9359kB/s)(49.0MiB/5490msec); 0 zone resets
00:21:15.699 slat (usec): min=9, max=479, avg=34.97, stdev=32.22
00:21:15.699 clat (msec): min=215, max=1300, avg=824.84, stdev=147.88
00:21:15.699 lat (msec): min=215, max=1301, avg=824.87, stdev=147.89
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 296], 5.00th=[ 542], 10.00th=[ 634], 20.00th=[ 785],
00:21:15.699 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 860],
00:21:15.699 | 70.00th=[ 877], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1062],
00:21:15.699 | 99.00th=[ 1267], 99.50th=[ 1267], 99.90th=[ 1301], 99.95th=[ 1301],
00:21:15.699 | 99.99th=[ 1301]
00:21:15.699 bw ( KiB/s): min= 3584, max= 9216, per=3.13%, avg=8446.10, stdev=1726.94, samples=10
00:21:15.699 iops : min= 28, max= 72, avg=65.90, stdev=13.45, samples=10
00:21:15.699 lat (msec) : 50=1.59%, 100=38.59%, 250=7.96%, 500=1.86%, 750=7.43%
00:21:15.699 lat (msec) : 1000=39.26%, 2000=3.32%
00:21:15.699 cpu : usr=0.22%, sys=0.31%, ctx=465, majf=0, minf=1
00:21:15.699 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6%
00:21:15.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.699 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.699 issued rwts: total=362,392,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.699 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.699 job12: (groupid=0, jobs=1): err= 0: pid=80865: Tue Jul 23 02:16:23 2024
00:21:15.699 read: IOPS=78, BW=9.87MiB/s (10.4MB/s)(54.2MiB/5495msec)
00:21:15.699 slat (usec): min=8, max=877, avg=31.92, stdev=59.14
00:21:15.699 clat (msec): min=36, max=522, avg=73.39, stdev=52.64
00:21:15.699 lat (msec): min=36, max=522, avg=73.43, stdev=52.64
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 42], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.699 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58],
00:21:15.699 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 134], 95.00th=[ 186],
00:21:15.699 | 99.00th=[ 226], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 523],
00:21:15.699 | 99.99th=[ 523]
00:21:15.699 bw ( KiB/s): min= 6387, max=19456, per=4.09%, avg=11006.70, stdev=3646.27, samples=10
00:21:15.699 iops : min= 49, max= 152, avg=85.90, stdev=28.61, samples=10
00:21:15.699 write: IOPS=70, BW=9085KiB/s (9303kB/s)(48.8MiB/5495msec); 0 zone resets
00:21:15.699 slat (usec): min=9, max=530, avg=41.90, stdev=48.36
00:21:15.699 clat (msec): min=217, max=1313, avg=818.63, stdev=143.36
00:21:15.699 lat (msec): min=217, max=1313, avg=818.67, stdev=143.37
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 309], 5.00th=[ 558], 10.00th=[ 659], 20.00th=[ 793],
00:21:15.699 | 30.00th=[ 818], 40.00th=[ 827], 50.00th=[ 835], 60.00th=[ 844],
00:21:15.699 | 70.00th=[ 852], 80.00th=[ 869], 90.00th=[ 885], 95.00th=[ 1028],
00:21:15.699 | 99.00th=[ 1250], 99.50th=[ 1301], 99.90th=[ 1318], 99.95th=[ 1318],
00:21:15.699 | 99.99th=[ 1318]
00:21:15.699 bw ( KiB/s): min= 3328, max= 9472, per=3.13%, avg=8446.10, stdev=1821.34, samples=10
00:21:15.699 iops : min= 26, max= 74, avg=65.90, stdev=14.19, samples=10
00:21:15.699 lat (msec) : 50=3.40%, 100=42.72%, 250=6.43%, 500=1.46%, 750=7.04%
00:21:15.699 lat (msec) : 1000=36.17%, 2000=2.79%
00:21:15.699 cpu : usr=0.11%, sys=0.47%, ctx=499, majf=0, minf=1
00:21:15.699 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4%
00:21:15.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.699 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.699 issued rwts: total=434,390,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.699 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.699 job13: (groupid=0, jobs=1): err= 0: pid=80915: Tue Jul 23 02:16:23 2024
00:21:15.699 read: IOPS=74, BW=9554KiB/s (9784kB/s)(51.6MiB/5533msec)
00:21:15.699 slat (usec): min=7, max=240, avg=29.14, stdev=16.91
00:21:15.699 clat (usec): min=1107, max=567216, avg=73880.78, stdev=59818.57
00:21:15.699 lat (usec): min=1157, max=567249, avg=73909.92, stdev=59816.72
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 6], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.699 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.699 | 70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 136], 95.00th=[ 192],
00:21:15.699 | 99.00th=[ 249], 99.50th=[ 542], 99.90th=[ 567], 99.95th=[ 567],
00:21:15.699 | 99.99th=[ 567]
00:21:15.699 bw ( KiB/s): min= 5888, max=17920, per=3.90%, avg=10493.50, stdev=3342.89, samples=10
00:21:15.699 iops : min= 46, max= 140, avg=81.90, stdev=26.07, samples=10
00:21:15.699 write: IOPS=70, BW=9045KiB/s (9262kB/s)(48.9MiB/5533msec); 0 zone resets
00:21:15.699 slat (usec): min=10, max=993, avg=37.92, stdev=51.77
00:21:15.699 clat (msec): min=11, max=1334, avg=825.99, stdev=157.28
00:21:15.699 lat (msec): min=11, max=1334, avg=826.03, stdev=157.28
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 178], 5.00th=[ 558], 10.00th=[ 684], 20.00th=[ 793],
00:21:15.699 | 30.00th=[ 818], 40.00th=[ 827], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.699 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 902], 95.00th=[ 1083],
00:21:15.699 | 99.00th=[ 1284], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:21:15.699 | 99.99th=[ 1334]
00:21:15.699 bw ( KiB/s): min= 3584, max= 9472, per=3.14%, avg=8471.80, stdev=1745.97, samples=10
00:21:15.699 iops : min= 28, max= 74, avg=66.10, stdev=13.62, samples=10
00:21:15.699 lat (msec) : 2=0.25%, 10=1.00%, 20=0.37%, 50=1.74%, 100=41.92%
00:21:15.699 lat (msec) : 250=6.09%, 500=1.37%, 750=5.47%, 1000=38.68%, 2000=3.11%
00:21:15.699 cpu : usr=0.20%, sys=0.40%, ctx=459, majf=0, minf=1
00:21:15.699 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:21:15.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.699 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.699 issued rwts: total=413,391,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.699 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.699 job14: (groupid=0, jobs=1): err= 0: pid=80925: Tue Jul 23 02:16:23 2024
00:21:15.699 read: IOPS=67, BW=8583KiB/s (8789kB/s)(46.1MiB/5503msec)
00:21:15.699 slat (usec): min=8, max=620, avg=37.79, stdev=54.67
00:21:15.699 clat (msec): min=41, max=542, avg=73.72, stdev=59.74
00:21:15.699 lat (msec): min=41, max=542, avg=73.75, stdev=59.73
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 42], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.699 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58],
00:21:15.699 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 110], 95.00th=[ 186],
00:21:15.699 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 542], 99.95th=[ 542],
00:21:15.699 | 99.99th=[ 542]
00:21:15.699 bw ( KiB/s): min= 6912, max=13487, per=3.46%, avg=9308.40, stdev=1979.10, samples=10
00:21:15.699 iops : min= 54, max= 105, avg=72.60, stdev=15.37, samples=10
00:21:15.699 write: IOPS=70, BW=9002KiB/s (9218kB/s)(48.4MiB/5503msec); 0 zone resets
00:21:15.699 slat (usec): min=13, max=641, avg=39.05, stdev=47.46
00:21:15.699 clat (msec): min=232, max=1336, avg=838.13, stdev=139.25
00:21:15.699 lat (msec): min=232, max=1336, avg=838.17, stdev=139.25
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 321], 5.00th=[ 558], 10.00th=[ 709], 20.00th=[ 818],
00:21:15.699 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.699 | 70.00th=[ 869], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1036],
00:21:15.699 | 99.00th=[ 1301], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:21:15.699 | 99.99th=[ 1334]
00:21:15.699 bw ( KiB/s): min= 2799, max= 9472, per=3.11%, avg=8393.30, stdev=1983.58, samples=10
00:21:15.699 iops : min= 21, max= 74, avg=65.40, stdev=15.76, samples=10
00:21:15.699 lat (msec) : 50=1.19%, 100=42.46%, 250=4.76%, 500=1.32%, 750=4.89%
00:21:15.699 lat (msec) : 1000=42.33%, 2000=3.04%
00:21:15.699 cpu : usr=0.16%, sys=0.44%, ctx=477, majf=0, minf=1
00:21:15.699 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7%
00:21:15.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.699 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.699 issued rwts: total=369,387,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.699 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.699 job15: (groupid=0, jobs=1): err= 0: pid=80961: Tue Jul 23 02:16:23 2024
00:21:15.699 read: IOPS=65, BW=8356KiB/s (8557kB/s)(44.9MiB/5499msec)
00:21:15.699 slat (usec): min=8, max=527, avg=36.77, stdev=57.95
00:21:15.699 clat (msec): min=42, max=509, avg=74.48, stdev=50.28
00:21:15.699 lat (msec): min=42, max=509, avg=74.52, stdev=50.27
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 43], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56],
00:21:15.699 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.699 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 138], 95.00th=[ 188],
00:21:15.699 | 99.00th=[ 209], 99.50th=[ 510], 99.90th=[ 510], 99.95th=[ 510],
00:21:15.699 | 99.99th=[ 510]
00:21:15.699 bw ( KiB/s): min= 6144, max=15872, per=3.40%, avg=9139.20, stdev=3009.86, samples=10
00:21:15.699 iops : min= 48, max= 124, avg=71.40, stdev=23.51, samples=10
00:21:15.699 write: IOPS=70, BW=9078KiB/s (9296kB/s)(48.8MiB/5499msec); 0 zone resets
00:21:15.699 slat (usec): min=9, max=646, avg=42.48, stdev=60.52
00:21:15.699 clat (msec): min=222, max=1314, avg=832.33, stdev=143.59
00:21:15.699 lat (msec): min=222, max=1314, avg=832.37, stdev=143.59
00:21:15.699 clat percentiles (msec):
00:21:15.699 | 1.00th=[ 296], 5.00th=[ 550], 10.00th=[ 684], 20.00th=[ 793],
00:21:15.699 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 860],
00:21:15.699 | 70.00th=[ 869], 80.00th=[ 894], 90.00th=[ 919], 95.00th=[ 1070],
00:21:15.699 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1318], 99.95th=[ 1318],
00:21:15.699 | 99.99th=[ 1318]
00:21:15.699 bw ( KiB/s): min= 3072, max= 9472, per=3.12%, avg=8422.40, stdev=1898.36, samples=10
00:21:15.700 iops : min= 24, max= 74, avg=65.80, stdev=14.83, samples=10
00:21:15.700 lat (msec) : 50=1.20%, 100=39.65%, 250=7.08%, 500=1.34%, 750=6.14%
00:21:15.700 lat (msec) : 1000=41.66%, 2000=2.94%
00:21:15.700 cpu : usr=0.11%, sys=0.38%, ctx=575, majf=0, minf=1
00:21:15.700 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6%
00:21:15.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.700 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.700 issued rwts: total=359,390,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.700 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.700 job16: (groupid=0, jobs=1): err= 0: pid=80962: Tue Jul 23 02:16:23 2024
00:21:15.700 read: IOPS=75, BW=9727KiB/s (9960kB/s)(52.4MiB/5514msec)
00:21:15.700 slat (usec): min=8, max=387, avg=27.77, stdev=32.51
00:21:15.700 clat (msec): min=22, max=541, avg=75.13, stdev=62.15
00:21:15.700 lat (msec): min=22, max=541, avg=75.16, stdev=62.15
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.700 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.700 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 116], 95.00th=[ 182],
00:21:15.700 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542],
00:21:15.700 | 99.99th=[ 542]
00:21:15.700 bw ( KiB/s): min= 7424, max=20480, per=3.94%, avg=10596.40, stdev=4034.00, samples=10
00:21:15.700 iops : min= 58, max= 160, avg=82.70, stdev=31.53, samples=10
00:21:15.700 write: IOPS=70, BW=8984KiB/s (9199kB/s)(48.4MiB/5514msec); 0 zone resets
00:21:15.700 slat (usec): min=11, max=501, avg=36.89, stdev=30.94
00:21:15.700 clat (msec): min=192, max=1322, avg=829.06, stdev=141.48
00:21:15.700 lat (msec): min=192, max=1322, avg=829.10, stdev=141.48
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 284], 5.00th=[ 567], 10.00th=[ 693], 20.00th=[ 802],
00:21:15.700 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.700 | 70.00th=[ 860], 80.00th=[ 869], 90.00th=[ 902], 95.00th=[ 1053],
00:21:15.700 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1318], 99.95th=[ 1318],
00:21:15.700 | 99.99th=[ 1318]
00:21:15.700 bw ( KiB/s): min= 255, max= 9216, per=2.84%, avg=7654.91, stdev=3037.93, samples=11
00:21:15.700 iops : min= 1, max= 72, avg=59.64, stdev=23.93, samples=11
00:21:15.700 lat (msec) : 50=3.10%, 100=41.94%, 250=6.45%, 500=1.36%, 750=6.20%
00:21:15.700 lat (msec) : 1000=38.09%, 2000=2.85%
00:21:15.700 cpu : usr=0.18%, sys=0.40%, ctx=497, majf=0, minf=1
00:21:15.700 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2%
00:21:15.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.700 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.700 issued rwts: total=419,387,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.700 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.700 job17: (groupid=0, jobs=1): err= 0: pid=80963: Tue Jul 23 02:16:23 2024
00:21:15.700 read: IOPS=66, BW=8476KiB/s (8679kB/s)(45.6MiB/5512msec)
00:21:15.700 slat (usec): min=9, max=102, avg=29.30, stdev=14.61
00:21:15.700 clat (msec): min=43, max=546, avg=73.45, stdev=48.97
00:21:15.700 lat (msec): min=43, max=546, avg=73.48, stdev=48.97
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 44], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.700 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59],
00:21:15.700 | 70.00th=[ 60], 80.00th=[ 64], 90.00th=[ 136], 95.00th=[ 163],
00:21:15.700 | 99.00th=[ 224], 99.50th=[ 518], 99.90th=[ 550], 99.95th=[ 550],
00:21:15.700 | 99.99th=[ 550]
00:21:15.700 bw ( KiB/s): min= 6400, max=18650, per=3.45%, avg=9286.80, stdev=3628.67, samples=10
00:21:15.700 iops : min= 50, max= 145, avg=72.40, stdev=28.11, samples=10
00:21:15.700 write: IOPS=70, BW=9033KiB/s (9250kB/s)(48.6MiB/5512msec); 0 zone resets
00:21:15.700 slat (nsec): min=10049, max=84419, avg=37751.13, stdev=15188.11
00:21:15.700 clat (msec): min=225, max=1326, avg=836.37, stdev=140.96
00:21:15.700 lat (msec): min=225, max=1326, avg=836.40, stdev=140.96
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 292], 5.00th=[ 558], 10.00th=[ 684], 20.00th=[ 810],
00:21:15.700 | 30.00th=[ 827], 40.00th=[ 844], 50.00th=[ 860], 60.00th=[ 869],
00:21:15.700 | 70.00th=[ 877], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1028],
00:21:15.700 | 99.00th=[ 1301], 99.50th=[ 1318], 99.90th=[ 1334], 99.95th=[ 1334],
00:21:15.700 | 99.99th=[ 1334]
00:21:15.700 bw ( KiB/s): min= 3065, max= 9216, per=3.11%, avg=8394.30, stdev=1889.77, samples=10
00:21:15.700 iops : min= 23, max= 72, avg=65.40, stdev=15.03, samples=10
00:21:15.700 lat (msec) : 50=1.19%, 100=40.05%, 250=7.16%, 500=1.33%, 750=5.97%
00:21:15.700 lat (msec) : 1000=41.25%, 2000=3.05%
00:21:15.700 cpu : usr=0.22%, sys=0.42%, ctx=430, majf=0, minf=1
00:21:15.700 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.6%
00:21:15.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.700 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.700 issued rwts: total=365,389,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.700 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.700 job18: (groupid=0, jobs=1): err= 0: pid=80964: Tue Jul 23 02:16:23 2024
00:21:15.700 read: IOPS=69, BW=8901KiB/s (9115kB/s)(48.0MiB/5522msec)
00:21:15.700 slat (usec): min=9, max=243, avg=26.91, stdev=21.13
00:21:15.700 clat (msec): min=16, max=547, avg=68.22, stdev=47.17
00:21:15.700 lat (msec): min=16, max=547, avg=68.25, stdev=47.17
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 54], 20.00th=[ 55],
00:21:15.700 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58],
00:21:15.700 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 90], 95.00th=[ 132],
00:21:15.700 | 99.00th=[ 241], 99.50th=[ 535], 99.90th=[ 550], 99.95th=[ 550],
00:21:15.700 | 99.99th=[ 550]
00:21:15.700 bw ( KiB/s): min= 6656, max=16160, per=3.64%, avg=9780.80, stdev=2558.18, samples=10
00:21:15.700 iops : min= 52, max= 126, avg=76.30, stdev=19.99, samples=10
00:21:15.700 write: IOPS=70, BW=8994KiB/s (9210kB/s)(48.5MiB/5522msec); 0 zone resets
00:21:15.700 slat (usec): min=9, max=1155, avg=33.91, stdev=61.65
00:21:15.700 clat (msec): min=216, max=1343, avg=841.60, stdev=144.19
00:21:15.700 lat (msec): min=216, max=1343, avg=841.63, stdev=144.19
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 275], 5.00th=[ 575], 10.00th=[ 701], 20.00th=[ 818],
00:21:15.700 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 844], 60.00th=[ 852],
00:21:15.700 | 70.00th=[ 869], 80.00th=[ 894], 90.00th=[ 927], 95.00th=[ 1116],
00:21:15.700 | 99.00th=[ 1301], 99.50th=[ 1334], 99.90th=[ 1351], 99.95th=[ 1351],
00:21:15.700 | 99.99th=[ 1351]
00:21:15.700 bw ( KiB/s): min= 2821, max= 9453, per=3.11%, avg=8369.80, stdev=1965.67, samples=10
00:21:15.700 iops : min= 22, max= 73, avg=65.30, stdev=15.32, samples=10
00:21:15.700 lat (msec) : 20=0.26%, 50=2.72%, 100=42.36%, 250=4.40%, 500=1.17%
00:21:15.700 lat (msec) : 750=5.05%, 1000=40.93%, 2000=3.11%
00:21:15.700 cpu : usr=0.11%, sys=0.36%, ctx=495, majf=0, minf=1
00:21:15.700 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8%
00:21:15.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:15.700 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:21:15.700 issued rwts: total=384,388,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:15.700 latency : target=0, window=0, percentile=100.00%, depth=64
00:21:15.700 job19: (groupid=0, jobs=1): err= 0: pid=80965: Tue Jul 23 02:16:23 2024
00:21:15.700 read: IOPS=79, BW=9.94MiB/s (10.4MB/s)(54.8MiB/5508msec)
00:21:15.700 slat (usec): min=9, max=251, avg=30.10, stdev=22.04
00:21:15.700 clat (msec): min=42, max=521, avg=73.50, stdev=51.20
00:21:15.700 lat (msec): min=42, max=521, avg=73.53, stdev=51.20
00:21:15.700 clat percentiles (msec):
00:21:15.700 | 1.00th=[ 44],
5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:21:15.700 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 59], 00:21:15.700 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 127], 95.00th=[ 180], 00:21:15.700 | 99.00th=[ 226], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 523], 00:21:15.700 | 99.99th=[ 523] 00:21:15.700 bw ( KiB/s): min= 7936, max=19339, per=4.13%, avg=11122.10, stdev=3152.79, samples=10 00:21:15.700 iops : min= 62, max= 151, avg=86.80, stdev=24.61, samples=10 00:21:15.700 write: IOPS=70, BW=9063KiB/s (9281kB/s)(48.8MiB/5508msec); 0 zone resets 00:21:15.700 slat (usec): min=10, max=464, avg=45.77, stdev=48.38 00:21:15.700 clat (msec): min=228, max=1312, avg=819.81, stdev=139.07 00:21:15.700 lat (msec): min=228, max=1312, avg=819.86, stdev=139.07 00:21:15.700 clat percentiles (msec): 00:21:15.700 | 1.00th=[ 300], 5.00th=[ 550], 10.00th=[ 684], 20.00th=[ 768], 00:21:15.700 | 30.00th=[ 810], 40.00th=[ 827], 50.00th=[ 844], 60.00th=[ 852], 00:21:15.700 | 70.00th=[ 860], 80.00th=[ 877], 90.00th=[ 894], 95.00th=[ 1028], 00:21:15.700 | 99.00th=[ 1267], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318], 00:21:15.700 | 99.99th=[ 1318] 00:21:15.700 bw ( KiB/s): min= 3053, max= 9472, per=3.12%, avg=8418.70, stdev=1904.02, samples=10 00:21:15.700 iops : min= 23, max= 74, avg=65.60, stdev=15.13, samples=10 00:21:15.700 lat (msec) : 50=0.97%, 100=44.69%, 250=7.13%, 500=1.21%, 750=7.37% 00:21:15.700 lat (msec) : 1000=36.11%, 2000=2.54% 00:21:15.700 cpu : usr=0.22%, sys=0.44%, ctx=544, majf=0, minf=1 00:21:15.700 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:21:15.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.700 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.700 issued rwts: total=438,390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.700 job20: (groupid=0, jobs=1): err= 0: pid=80966: Tue Jul 23 
02:16:23 2024 00:21:15.700 read: IOPS=69, BW=8928KiB/s (9142kB/s)(47.9MiB/5491msec) 00:21:15.700 slat (usec): min=10, max=236, avg=31.16, stdev=16.83 00:21:15.700 clat (msec): min=41, max=534, avg=71.89, stdev=52.28 00:21:15.700 lat (msec): min=41, max=534, avg=71.92, stdev=52.28 00:21:15.700 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 43], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:21:15.701 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.701 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 125], 95.00th=[ 165], 00:21:15.701 | 99.00th=[ 207], 99.50th=[ 523], 99.90th=[ 535], 99.95th=[ 535], 00:21:15.701 | 99.99th=[ 535] 00:21:15.701 bw ( KiB/s): min= 6400, max=14080, per=3.61%, avg=9700.60, stdev=2460.37, samples=10 00:21:15.701 iops : min= 50, max= 110, avg=75.70, stdev=19.25, samples=10 00:21:15.701 write: IOPS=71, BW=9091KiB/s (9309kB/s)(48.8MiB/5491msec); 0 zone resets 00:21:15.701 slat (usec): min=10, max=546, avg=40.08, stdev=29.96 00:21:15.701 clat (msec): min=213, max=1312, avg=828.98, stdev=140.98 00:21:15.701 lat (msec): min=213, max=1312, avg=829.02, stdev=140.98 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 292], 5.00th=[ 535], 10.00th=[ 667], 20.00th=[ 802], 00:21:15.701 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 852], 00:21:15.701 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 919], 95.00th=[ 995], 00:21:15.701 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1318], 99.95th=[ 1318], 00:21:15.701 | 99.99th=[ 1318] 00:21:15.701 bw ( KiB/s): min= 3328, max= 9216, per=3.13%, avg=8446.10, stdev=1813.33, samples=10 00:21:15.701 iops : min= 26, max= 72, avg=65.90, stdev=14.13, samples=10 00:21:15.701 lat (msec) : 50=1.55%, 100=42.04%, 250=5.82%, 500=1.81%, 750=5.30% 00:21:15.701 lat (msec) : 1000=41.01%, 2000=2.46% 00:21:15.701 cpu : usr=0.18%, sys=0.51%, ctx=428, majf=0, minf=1 00:21:15.701 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.8% 00:21:15.701 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.701 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.701 issued rwts: total=383,390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.701 job21: (groupid=0, jobs=1): err= 0: pid=80967: Tue Jul 23 02:16:23 2024 00:21:15.701 read: IOPS=64, BW=8288KiB/s (8487kB/s)(44.5MiB/5498msec) 00:21:15.701 slat (nsec): min=8838, max=83630, avg=28110.11, stdev=14390.44 00:21:15.701 clat (msec): min=41, max=536, avg=70.02, stdev=41.47 00:21:15.701 lat (msec): min=41, max=536, avg=70.05, stdev=41.47 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 44], 5.00th=[ 54], 10.00th=[ 55], 20.00th=[ 55], 00:21:15.701 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.701 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 121], 95.00th=[ 161], 00:21:15.701 | 99.00th=[ 209], 99.50th=[ 213], 99.90th=[ 535], 99.95th=[ 535], 00:21:15.701 | 99.99th=[ 535] 00:21:15.701 bw ( KiB/s): min= 5632, max=13082, per=3.38%, avg=9090.60, stdev=2499.05, samples=10 00:21:15.701 iops : min= 44, max= 102, avg=71.00, stdev=19.49, samples=10 00:21:15.701 write: IOPS=71, BW=9126KiB/s (9345kB/s)(49.0MiB/5498msec); 0 zone resets 00:21:15.701 slat (usec): min=10, max=295, avg=37.47, stdev=20.52 00:21:15.701 clat (msec): min=213, max=1314, avg=832.60, stdev=147.12 00:21:15.701 lat (msec): min=213, max=1314, avg=832.63, stdev=147.12 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 305], 5.00th=[ 527], 10.00th=[ 676], 20.00th=[ 802], 00:21:15.701 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852], 00:21:15.701 | 70.00th=[ 877], 80.00th=[ 894], 90.00th=[ 919], 95.00th=[ 1083], 00:21:15.701 | 99.00th=[ 1267], 99.50th=[ 1301], 99.90th=[ 1318], 99.95th=[ 1318], 00:21:15.701 | 99.99th=[ 1318] 00:21:15.701 bw ( KiB/s): min= 3334, max= 9472, per=3.13%, avg=8448.60, stdev=1820.35, samples=10 00:21:15.701 iops : min= 26, max= 74, 
avg=66.00, stdev=14.24, samples=10 00:21:15.701 lat (msec) : 50=0.67%, 100=41.04%, 250=6.15%, 500=1.34%, 750=5.21% 00:21:15.701 lat (msec) : 1000=42.25%, 2000=3.34% 00:21:15.701 cpu : usr=0.09%, sys=0.42%, ctx=436, majf=0, minf=1 00:21:15.701 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:21:15.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.701 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.701 issued rwts: total=356,392,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.701 job22: (groupid=0, jobs=1): err= 0: pid=80968: Tue Jul 23 02:16:23 2024 00:21:15.701 read: IOPS=78, BW=9.78MiB/s (10.3MB/s)(54.0MiB/5522msec) 00:21:15.701 slat (usec): min=9, max=554, avg=28.05, stdev=28.93 00:21:15.701 clat (msec): min=5, max=549, avg=68.51, stdev=61.82 00:21:15.701 lat (msec): min=5, max=549, avg=68.54, stdev=61.82 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 10], 5.00th=[ 42], 10.00th=[ 52], 20.00th=[ 55], 00:21:15.701 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.701 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 69], 95.00th=[ 138], 00:21:15.701 | 99.00th=[ 523], 99.50th=[ 535], 99.90th=[ 550], 99.95th=[ 550], 00:21:15.701 | 99.99th=[ 550] 00:21:15.701 bw ( KiB/s): min= 7936, max=15360, per=4.06%, avg=10929.20, stdev=1966.08, samples=10 00:21:15.701 iops : min= 62, max= 120, avg=85.30, stdev=15.42, samples=10 00:21:15.701 write: IOPS=70, BW=8994KiB/s (9210kB/s)(48.5MiB/5522msec); 0 zone resets 00:21:15.701 slat (usec): min=10, max=546, avg=36.69, stdev=30.15 00:21:15.701 clat (msec): min=93, max=1331, avg=832.97, stdev=144.34 00:21:15.701 lat (msec): min=93, max=1331, avg=833.01, stdev=144.35 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 266], 5.00th=[ 567], 10.00th=[ 701], 20.00th=[ 810], 00:21:15.701 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 
60.00th=[ 852], 00:21:15.701 | 70.00th=[ 860], 80.00th=[ 877], 90.00th=[ 936], 95.00th=[ 1028], 00:21:15.701 | 99.00th=[ 1250], 99.50th=[ 1301], 99.90th=[ 1334], 99.95th=[ 1334], 00:21:15.701 | 99.99th=[ 1334] 00:21:15.701 bw ( KiB/s): min= 256, max= 9216, per=2.85%, avg=7678.36, stdev=3002.20, samples=11 00:21:15.701 iops : min= 2, max= 72, avg=59.91, stdev=23.42, samples=11 00:21:15.701 lat (msec) : 10=0.61%, 20=1.10%, 50=3.17%, 100=44.27%, 250=2.93% 00:21:15.701 lat (msec) : 500=1.59%, 750=4.63%, 1000=38.90%, 2000=2.80% 00:21:15.701 cpu : usr=0.29%, sys=0.36%, ctx=441, majf=0, minf=1 00:21:15.701 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:21:15.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.701 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.701 issued rwts: total=432,388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.701 job23: (groupid=0, jobs=1): err= 0: pid=80969: Tue Jul 23 02:16:23 2024 00:21:15.701 read: IOPS=64, BW=8218KiB/s (8415kB/s)(44.2MiB/5514msec) 00:21:15.701 slat (usec): min=10, max=495, avg=35.56, stdev=33.93 00:21:15.701 clat (msec): min=41, max=543, avg=72.44, stdev=43.55 00:21:15.701 lat (msec): min=41, max=543, avg=72.48, stdev=43.55 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:21:15.701 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:21:15.701 | 70.00th=[ 60], 80.00th=[ 67], 90.00th=[ 128], 95.00th=[ 169], 00:21:15.701 | 99.00th=[ 209], 99.50th=[ 236], 99.90th=[ 542], 99.95th=[ 542], 00:21:15.701 | 99.99th=[ 542] 00:21:15.701 bw ( KiB/s): min= 5376, max=17408, per=3.35%, avg=9009.50, stdev=3434.49, samples=10 00:21:15.701 iops : min= 42, max= 136, avg=70.30, stdev=26.85, samples=10 00:21:15.701 write: IOPS=70, BW=9053KiB/s (9271kB/s)(48.8MiB/5514msec); 0 zone resets 00:21:15.701 slat 
(usec): min=17, max=406, avg=45.33, stdev=25.24 00:21:15.701 clat (msec): min=230, max=1323, avg=837.19, stdev=144.14 00:21:15.701 lat (msec): min=230, max=1323, avg=837.23, stdev=144.14 00:21:15.701 clat percentiles (msec): 00:21:15.701 | 1.00th=[ 300], 5.00th=[ 558], 10.00th=[ 667], 20.00th=[ 810], 00:21:15.701 | 30.00th=[ 835], 40.00th=[ 844], 50.00th=[ 852], 60.00th=[ 860], 00:21:15.701 | 70.00th=[ 877], 80.00th=[ 894], 90.00th=[ 919], 95.00th=[ 1036], 00:21:15.701 | 99.00th=[ 1318], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1318], 00:21:15.701 | 99.99th=[ 1318] 00:21:15.701 bw ( KiB/s): min= 3072, max= 9216, per=3.11%, avg=8395.00, stdev=1887.57, samples=10 00:21:15.701 iops : min= 24, max= 72, avg=65.50, stdev=14.72, samples=10 00:21:15.701 lat (msec) : 50=1.21%, 100=39.92%, 250=6.59%, 500=1.34%, 750=6.45% 00:21:15.701 lat (msec) : 1000=41.13%, 2000=3.36% 00:21:15.701 cpu : usr=0.16%, sys=0.49%, ctx=520, majf=0, minf=1 00:21:15.702 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:21:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.702 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.702 issued rwts: total=354,390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.702 job24: (groupid=0, jobs=1): err= 0: pid=80970: Tue Jul 23 02:16:23 2024 00:21:15.702 read: IOPS=75, BW=9680KiB/s (9913kB/s)(52.2MiB/5527msec) 00:21:15.702 slat (usec): min=7, max=2735, avg=43.60, stdev=141.53 00:21:15.702 clat (msec): min=15, max=553, avg=68.34, stdev=51.30 00:21:15.702 lat (msec): min=15, max=553, avg=68.38, stdev=51.32 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 55], 00:21:15.702 | 30.00th=[ 56], 40.00th=[ 56], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.702 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 90], 95.00th=[ 131], 00:21:15.702 | 99.00th=[ 251], 99.50th=[ 
527], 99.90th=[ 550], 99.95th=[ 550], 00:21:15.702 | 99.99th=[ 550] 00:21:15.702 bw ( KiB/s): min= 256, max=14592, per=3.59%, avg=9656.27, stdev=3645.48, samples=11 00:21:15.702 iops : min= 2, max= 114, avg=75.36, stdev=28.47, samples=11 00:21:15.702 write: IOPS=70, BW=8963KiB/s (9178kB/s)(48.4MiB/5527msec); 0 zone resets 00:21:15.702 slat (usec): min=9, max=1111, avg=51.95, stdev=80.37 00:21:15.702 clat (msec): min=202, max=1291, avg=838.55, stdev=137.83 00:21:15.702 lat (msec): min=202, max=1291, avg=838.60, stdev=137.83 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 275], 5.00th=[ 575], 10.00th=[ 693], 20.00th=[ 818], 00:21:15.702 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 852], 00:21:15.702 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 919], 95.00th=[ 1062], 00:21:15.702 | 99.00th=[ 1267], 99.50th=[ 1284], 99.90th=[ 1284], 99.95th=[ 1284], 00:21:15.702 | 99.99th=[ 1284] 00:21:15.702 bw ( KiB/s): min= 2816, max= 9216, per=3.11%, avg=8369.40, stdev=1971.52, samples=10 00:21:15.702 iops : min= 22, max= 72, avg=65.30, stdev=15.38, samples=10 00:21:15.702 lat (msec) : 20=0.25%, 50=3.35%, 100=44.10%, 250=3.85%, 500=1.37% 00:21:15.702 lat (msec) : 750=4.60%, 1000=39.50%, 2000=2.98% 00:21:15.702 cpu : usr=0.13%, sys=0.45%, ctx=580, majf=0, minf=1 00:21:15.702 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.2% 00:21:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.702 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.702 issued rwts: total=418,387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.702 job25: (groupid=0, jobs=1): err= 0: pid=80971: Tue Jul 23 02:16:23 2024 00:21:15.702 read: IOPS=72, BW=9245KiB/s (9467kB/s)(49.9MiB/5524msec) 00:21:15.702 slat (nsec): min=9762, max=85666, avg=21649.08, stdev=10757.37 00:21:15.702 clat (msec): min=37, max=534, avg=69.26, stdev=41.74 
00:21:15.702 lat (msec): min=37, max=534, avg=69.28, stdev=41.74 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 55], 00:21:15.702 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:21:15.702 | 70.00th=[ 59], 80.00th=[ 62], 90.00th=[ 104], 95.00th=[ 180], 00:21:15.702 | 99.00th=[ 226], 99.50th=[ 230], 99.90th=[ 535], 99.95th=[ 535], 00:21:15.702 | 99.99th=[ 535] 00:21:15.702 bw ( KiB/s): min= 8192, max=15360, per=3.79%, avg=10186.90, stdev=2104.16, samples=10 00:21:15.702 iops : min= 64, max= 120, avg=79.50, stdev=16.47, samples=10 00:21:15.702 write: IOPS=70, BW=9037KiB/s (9254kB/s)(48.8MiB/5524msec); 0 zone resets 00:21:15.702 slat (usec): min=10, max=523, avg=30.89, stdev=27.59 00:21:15.702 clat (msec): min=235, max=1266, avg=834.04, stdev=140.09 00:21:15.702 lat (msec): min=236, max=1266, avg=834.07, stdev=140.08 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 288], 5.00th=[ 558], 10.00th=[ 684], 20.00th=[ 810], 00:21:15.702 | 30.00th=[ 827], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 860], 00:21:15.702 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 894], 95.00th=[ 1116], 00:21:15.702 | 99.00th=[ 1250], 99.50th=[ 1267], 99.90th=[ 1267], 99.95th=[ 1267], 00:21:15.702 | 99.99th=[ 1267] 00:21:15.702 bw ( KiB/s): min= 2816, max= 9472, per=3.11%, avg=8395.00, stdev=1977.99, samples=10 00:21:15.702 iops : min= 22, max= 74, avg=65.50, stdev=15.43, samples=10 00:21:15.702 lat (msec) : 50=3.17%, 100=42.08%, 250=5.45%, 500=1.14%, 750=4.56% 00:21:15.702 lat (msec) : 1000=40.43%, 2000=3.17% 00:21:15.702 cpu : usr=0.14%, sys=0.33%, ctx=465, majf=0, minf=1 00:21:15.702 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.1%, >=64=92.0% 00:21:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.702 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.702 issued rwts: total=399,390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.702 
latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.702 job26: (groupid=0, jobs=1): err= 0: pid=80972: Tue Jul 23 02:16:23 2024 00:21:15.702 read: IOPS=63, BW=8150KiB/s (8345kB/s)(43.8MiB/5497msec) 00:21:15.702 slat (nsec): min=8702, max=66531, avg=26941.80, stdev=11660.22 00:21:15.702 clat (msec): min=41, max=518, avg=74.25, stdev=50.49 00:21:15.702 lat (msec): min=41, max=518, avg=74.28, stdev=50.49 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 43], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 55], 00:21:15.702 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:21:15.702 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 124], 95.00th=[ 182], 00:21:15.702 | 99.00th=[ 220], 99.50th=[ 506], 99.90th=[ 518], 99.95th=[ 518], 00:21:15.702 | 99.99th=[ 518] 00:21:15.702 bw ( KiB/s): min= 6656, max=17152, per=3.31%, avg=8907.20, stdev=3024.47, samples=10 00:21:15.702 iops : min= 52, max= 134, avg=69.50, stdev=23.67, samples=10 00:21:15.702 write: IOPS=71, BW=9105KiB/s (9323kB/s)(48.9MiB/5497msec); 0 zone resets 00:21:15.702 slat (nsec): min=10016, max=76566, avg=33988.46, stdev=12701.96 00:21:15.702 clat (msec): min=213, max=1354, avg=831.86, stdev=144.09 00:21:15.702 lat (msec): min=213, max=1354, avg=831.90, stdev=144.09 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 284], 5.00th=[ 542], 10.00th=[ 693], 20.00th=[ 793], 00:21:15.702 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 860], 00:21:15.702 | 70.00th=[ 877], 80.00th=[ 885], 90.00th=[ 919], 95.00th=[ 1053], 00:21:15.702 | 99.00th=[ 1284], 99.50th=[ 1334], 99.90th=[ 1351], 99.95th=[ 1351], 00:21:15.702 | 99.99th=[ 1351] 00:21:15.702 bw ( KiB/s): min= 3328, max= 9472, per=3.13%, avg=8446.20, stdev=1821.67, samples=10 00:21:15.702 iops : min= 26, max= 74, avg=65.90, stdev=14.21, samples=10 00:21:15.702 lat (msec) : 50=1.75%, 100=38.73%, 250=6.88%, 500=1.35%, 750=6.48% 00:21:15.702 lat (msec) : 1000=41.84%, 2000=2.97% 00:21:15.702 cpu : usr=0.25%, sys=0.36%, 
ctx=437, majf=0, minf=1 00:21:15.702 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:21:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.702 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.702 issued rwts: total=350,391,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.702 job27: (groupid=0, jobs=1): err= 0: pid=80973: Tue Jul 23 02:16:23 2024 00:21:15.702 read: IOPS=67, BW=8672KiB/s (8881kB/s)(46.8MiB/5520msec) 00:21:15.702 slat (usec): min=7, max=325, avg=29.93, stdev=31.84 00:21:15.702 clat (msec): min=16, max=567, avg=74.60, stdev=57.34 00:21:15.702 lat (msec): min=16, max=567, avg=74.63, stdev=57.34 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 27], 5.00th=[ 51], 10.00th=[ 54], 20.00th=[ 55], 00:21:15.702 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:21:15.702 | 70.00th=[ 61], 80.00th=[ 64], 90.00th=[ 129], 95.00th=[ 163], 00:21:15.702 | 99.00th=[ 275], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 567], 00:21:15.702 | 99.99th=[ 567] 00:21:15.702 bw ( KiB/s): min= 5376, max=18944, per=3.53%, avg=9495.90, stdev=3838.53, samples=10 00:21:15.702 iops : min= 42, max= 148, avg=74.10, stdev=30.02, samples=10 00:21:15.702 write: IOPS=70, BW=8974KiB/s (9189kB/s)(48.4MiB/5520msec); 0 zone resets 00:21:15.702 slat (usec): min=10, max=280, avg=42.62, stdev=29.03 00:21:15.702 clat (msec): min=219, max=1320, avg=839.02, stdev=140.03 00:21:15.702 lat (msec): min=219, max=1320, avg=839.07, stdev=140.03 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 271], 5.00th=[ 584], 10.00th=[ 718], 20.00th=[ 810], 00:21:15.702 | 30.00th=[ 827], 40.00th=[ 844], 50.00th=[ 852], 60.00th=[ 860], 00:21:15.702 | 70.00th=[ 869], 80.00th=[ 877], 90.00th=[ 894], 95.00th=[ 1062], 00:21:15.702 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1318], 99.95th=[ 1318], 00:21:15.702 | 99.99th=[ 
1318] 00:21:15.702 bw ( KiB/s): min= 2816, max= 9216, per=3.11%, avg=8369.30, stdev=1967.51, samples=10 00:21:15.702 iops : min= 22, max= 72, avg=65.30, stdev=15.33, samples=10 00:21:15.702 lat (msec) : 20=0.26%, 50=2.10%, 100=39.95%, 250=6.31%, 500=1.58% 00:21:15.702 lat (msec) : 750=4.47%, 1000=42.18%, 2000=3.15% 00:21:15.702 cpu : usr=0.18%, sys=0.31%, ctx=738, majf=0, minf=1 00:21:15.702 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:21:15.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.702 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.702 issued rwts: total=374,387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.702 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.702 job28: (groupid=0, jobs=1): err= 0: pid=80974: Tue Jul 23 02:16:23 2024 00:21:15.702 read: IOPS=78, BW=9.87MiB/s (10.3MB/s)(54.4MiB/5509msec) 00:21:15.702 slat (nsec): min=11993, max=92044, avg=25444.83, stdev=11278.53 00:21:15.702 clat (msec): min=39, max=539, avg=73.71, stdev=53.13 00:21:15.702 lat (msec): min=39, max=539, avg=73.73, stdev=53.13 00:21:15.702 clat percentiles (msec): 00:21:15.702 | 1.00th=[ 42], 5.00th=[ 50], 10.00th=[ 54], 20.00th=[ 56], 00:21:15.702 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:21:15.702 | 70.00th=[ 60], 80.00th=[ 63], 90.00th=[ 130], 95.00th=[ 178], 00:21:15.702 | 99.00th=[ 243], 99.50th=[ 527], 99.90th=[ 542], 99.95th=[ 542], 00:21:15.702 | 99.99th=[ 542] 00:21:15.702 bw ( KiB/s): min= 7680, max=19712, per=4.10%, avg=11031.30, stdev=3500.30, samples=10 00:21:15.703 iops : min= 60, max= 154, avg=86.10, stdev=27.34, samples=10 00:21:15.703 write: IOPS=70, BW=9015KiB/s (9231kB/s)(48.5MiB/5509msec); 0 zone resets 00:21:15.703 slat (usec): min=13, max=254, avg=35.02, stdev=17.68 00:21:15.703 clat (msec): min=230, max=1318, avg=824.48, stdev=140.37 00:21:15.703 lat (msec): min=231, max=1318, avg=824.51, stdev=140.37 00:21:15.703 
clat percentiles (msec): 00:21:15.703 | 1.00th=[ 309], 5.00th=[ 584], 10.00th=[ 676], 20.00th=[ 793], 00:21:15.703 | 30.00th=[ 810], 40.00th=[ 827], 50.00th=[ 835], 60.00th=[ 852], 00:21:15.703 | 70.00th=[ 860], 80.00th=[ 869], 90.00th=[ 894], 95.00th=[ 1045], 00:21:15.703 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1318], 99.95th=[ 1318], 00:21:15.703 | 99.99th=[ 1318] 00:21:15.703 bw ( KiB/s): min= 3072, max= 9216, per=3.11%, avg=8395.00, stdev=1891.70, samples=10 00:21:15.703 iops : min= 24, max= 72, avg=65.50, stdev=14.77, samples=10 00:21:15.703 lat (msec) : 50=2.92%, 100=42.77%, 250=7.05%, 500=1.09%, 750=6.80% 00:21:15.703 lat (msec) : 1000=36.45%, 2000=2.92% 00:21:15.703 cpu : usr=0.18%, sys=0.36%, ctx=470, majf=0, minf=1 00:21:15.703 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.3% 00:21:15.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.703 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.703 issued rwts: total=435,388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.703 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.703 job29: (groupid=0, jobs=1): err= 0: pid=80975: Tue Jul 23 02:16:23 2024 00:21:15.703 read: IOPS=78, BW=9.77MiB/s (10.2MB/s)(53.9MiB/5516msec) 00:21:15.703 slat (nsec): min=9193, max=74914, avg=26653.21, stdev=11584.31 00:21:15.703 clat (msec): min=10, max=547, avg=74.91, stdev=63.46 00:21:15.703 lat (msec): min=10, max=547, avg=74.94, stdev=63.46 00:21:15.703 clat percentiles (msec): 00:21:15.703 | 1.00th=[ 21], 5.00th=[ 46], 10.00th=[ 54], 20.00th=[ 55], 00:21:15.703 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 59], 00:21:15.703 | 70.00th=[ 60], 80.00th=[ 65], 90.00th=[ 115], 95.00th=[ 194], 00:21:15.703 | 99.00th=[ 523], 99.50th=[ 523], 99.90th=[ 550], 99.95th=[ 550], 00:21:15.703 | 99.99th=[ 550] 00:21:15.703 bw ( KiB/s): min= 5632, max=22784, per=4.05%, avg=10905.60, stdev=4584.55, samples=10 00:21:15.703 iops : 
min= 44, max= 178, avg=85.20, stdev=35.82, samples=10 00:21:15.703 write: IOPS=69, BW=8957KiB/s (9172kB/s)(48.2MiB/5516msec); 0 zone resets 00:21:15.703 slat (usec): min=10, max=115, avg=35.05, stdev=13.34 00:21:15.703 clat (msec): min=198, max=1299, avg=829.31, stdev=139.74 00:21:15.703 lat (msec): min=198, max=1299, avg=829.34, stdev=139.75 00:21:15.703 clat percentiles (msec): 00:21:15.703 | 1.00th=[ 292], 5.00th=[ 575], 10.00th=[ 693], 20.00th=[ 793], 00:21:15.703 | 30.00th=[ 818], 40.00th=[ 835], 50.00th=[ 844], 60.00th=[ 852], 00:21:15.703 | 70.00th=[ 860], 80.00th=[ 877], 90.00th=[ 919], 95.00th=[ 1053], 00:21:15.703 | 99.00th=[ 1250], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:21:15.703 | 99.99th=[ 1301] 00:21:15.703 bw ( KiB/s): min= 256, max= 9216, per=2.83%, avg=7633.45, stdev=3032.96, samples=11 00:21:15.703 iops : min= 2, max= 72, avg=59.64, stdev=23.70, samples=11 00:21:15.703 lat (msec) : 20=0.24%, 50=3.06%, 100=42.72%, 250=5.75%, 500=1.84% 00:21:15.703 lat (msec) : 750=5.63%, 1000=37.82%, 2000=2.94% 00:21:15.703 cpu : usr=0.27%, sys=0.38%, ctx=447, majf=0, minf=1 00:21:15.703 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.3% 00:21:15.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.703 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:15.703 issued rwts: total=431,386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.703 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:15.703 00:21:15.703 Run status group 0 (all jobs): 00:21:15.703 READ: bw=263MiB/s (275MB/s), 8004KiB/s-9.94MiB/s (8196kB/s-10.4MB/s), io=1456MiB (1526MB), run=5484-5542msec 00:21:15.703 WRITE: bw=263MiB/s (276MB/s), 8957KiB/s-9140KiB/s (9172kB/s-9359kB/s), io=1459MiB (1529MB), run=5484-5542msec 00:21:15.703 00:21:15.703 Disk stats (read/write): 00:21:15.703 sda: ios=406/377, merge=0/0, ticks=24021/306056, in_queue=330078, util=92.73% 00:21:15.703 sdb: ios=393/375, merge=0/0, 
ticks=24154/304928, in_queue=329082, util=92.45% 00:21:15.703 sdc: ios=450/374, merge=0/0, ticks=27380/300289, in_queue=327670, util=92.75% 00:21:15.703 sdd: ios=420/375, merge=0/0, ticks=24789/304453, in_queue=329242, util=93.20% 00:21:15.703 sde: ios=427/375, merge=0/0, ticks=27472/299944, in_queue=327417, util=92.74% 00:21:15.703 sdf: ios=449/375, merge=0/0, ticks=27146/301060, in_queue=328207, util=93.31% 00:21:15.703 sdg: ios=430/375, merge=0/0, ticks=24940/304789, in_queue=329729, util=94.26% 00:21:15.703 sdh: ios=392/376, merge=0/0, ticks=23741/306341, in_queue=330083, util=94.18% 00:21:15.703 sdi: ios=377/375, merge=0/0, ticks=24758/302851, in_queue=327609, util=93.70% 00:21:15.703 sdj: ios=421/375, merge=0/0, ticks=30275/298500, in_queue=328775, util=93.32% 00:21:15.703 sdk: ios=425/375, merge=0/0, ticks=26677/302643, in_queue=329321, util=93.97% 00:21:15.703 sdl: ios=376/374, merge=0/0, ticks=27032/299270, in_queue=326303, util=93.68% 00:21:15.703 sdm: ios=434/374, merge=0/0, ticks=30435/296534, in_queue=326969, util=93.72% 00:21:15.703 sdn: ios=413/379, merge=0/0, ticks=28966/301698, in_queue=330665, util=95.05% 00:21:15.703 sdo: ios=369/373, merge=0/0, ticks=25280/302698, in_queue=327979, util=94.35% 00:21:15.703 sdp: ios=359/373, merge=0/0, ticks=25755/301081, in_queue=326836, util=94.74% 00:21:15.703 sdq: ios=419/375, merge=0/0, ticks=29071/300405, in_queue=329476, util=95.56% 00:21:15.703 sdr: ios=365/374, merge=0/0, ticks=25832/302466, in_queue=328299, util=95.67% 00:21:15.703 sds: ios=384/375, merge=0/0, ticks=25188/304187, in_queue=329375, util=96.03% 00:21:15.703 sdt: ios=438/374, merge=0/0, ticks=30733/297091, in_queue=327824, util=95.95% 00:21:15.703 sdu: ios=383/374, merge=0/0, ticks=26094/300941, in_queue=327035, util=95.82% 00:21:15.703 sdv: ios=356/374, merge=0/0, ticks=24434/302723, in_queue=327158, util=96.16% 00:21:15.703 sdw: ios=432/377, merge=0/0, ticks=27164/303298, in_queue=330463, util=97.04% 00:21:15.703 sdx: ios=354/374, 
merge=0/0, ticks=25138/302921, in_queue=328060, util=96.71% 00:21:15.703 sdy: ios=418/375, merge=0/0, ticks=27020/303132, in_queue=330152, util=97.01% 00:21:15.703 sdz: ios=399/374, merge=0/0, ticks=27142/301878, in_queue=329020, util=97.06% 00:21:15.703 sdaa: ios=350/374, merge=0/0, ticks=25043/302055, in_queue=327098, util=96.66% 00:21:15.703 sdab: ios=374/375, merge=0/0, ticks=26322/303096, in_queue=329418, util=97.29% 00:21:15.703 sdac: ios=435/374, merge=0/0, ticks=30615/297709, in_queue=328324, util=97.27% 00:21:15.703 sdad: ios=431/375, merge=0/0, ticks=29880/300409, in_queue=330289, util=97.94% 00:21:15.703 [2024-07-23 02:16:23.782891] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.785372] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.787630] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.789932] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.792169] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.794503] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.796820] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.799044] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.801331] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.803518] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.806076] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 [2024-07-23 02:16:23.809094] 
scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.703 02:16:23 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 262144 -d 16 -t randwrite -r 10 00:21:15.703 [global] 00:21:15.703 thread=1 00:21:15.703 invalidate=1 00:21:15.703 rw=randwrite 00:21:15.703 time_based=1 00:21:15.703 runtime=10 00:21:15.703 ioengine=libaio 00:21:15.703 direct=1 00:21:15.703 bs=262144 00:21:15.703 iodepth=16 00:21:15.703 norandommap=1 00:21:15.703 numjobs=1 00:21:15.703 00:21:15.703 [job0] 00:21:15.703 filename=/dev/sda 00:21:15.703 [job1] 00:21:15.703 filename=/dev/sdb 00:21:15.703 [job2] 00:21:15.703 filename=/dev/sdc 00:21:15.703 [job3] 00:21:15.703 filename=/dev/sdd 00:21:15.703 [job4] 00:21:15.703 filename=/dev/sde 00:21:15.703 [job5] 00:21:15.703 filename=/dev/sdf 00:21:15.703 [job6] 00:21:15.703 filename=/dev/sdg 00:21:15.703 [job7] 00:21:15.703 filename=/dev/sdh 00:21:15.703 [job8] 00:21:15.703 filename=/dev/sdi 00:21:15.703 [job9] 00:21:15.703 filename=/dev/sdj 00:21:15.703 [job10] 00:21:15.703 filename=/dev/sdk 00:21:15.703 [job11] 00:21:15.703 filename=/dev/sdl 00:21:15.703 [job12] 00:21:15.703 filename=/dev/sdm 00:21:15.703 [job13] 00:21:15.703 filename=/dev/sdn 00:21:15.703 [job14] 00:21:15.703 filename=/dev/sdo 00:21:15.703 [job15] 00:21:15.703 filename=/dev/sdp 00:21:15.703 [job16] 00:21:15.703 filename=/dev/sdq 00:21:15.703 [job17] 00:21:15.703 filename=/dev/sdr 00:21:15.703 [job18] 00:21:15.703 filename=/dev/sds 00:21:15.703 [job19] 00:21:15.703 filename=/dev/sdt 00:21:15.703 [job20] 00:21:15.703 filename=/dev/sdu 00:21:15.703 [job21] 00:21:15.703 filename=/dev/sdv 00:21:15.703 [job22] 00:21:15.703 filename=/dev/sdw 00:21:15.703 [job23] 00:21:15.703 filename=/dev/sdx 00:21:15.703 [job24] 00:21:15.703 filename=/dev/sdy 00:21:15.703 [job25] 00:21:15.703 filename=/dev/sdz 00:21:15.703 [job26] 00:21:15.703 filename=/dev/sdaa 00:21:15.703 [job27] 
00:21:15.703 filename=/dev/sdab 00:21:15.703 [job28] 00:21:15.703 filename=/dev/sdac 00:21:15.703 [job29] 00:21:15.703 filename=/dev/sdad 00:21:15.703 queue_depth set to 113 (sda) 00:21:15.703 queue_depth set to 113 (sdb) 00:21:15.704 queue_depth set to 113 (sdc) 00:21:15.704 queue_depth set to 113 (sdd) 00:21:15.704 queue_depth set to 113 (sde) 00:21:15.704 queue_depth set to 113 (sdf) 00:21:15.704 queue_depth set to 113 (sdg) 00:21:15.704 queue_depth set to 113 (sdh) 00:21:15.704 queue_depth set to 113 (sdi) 00:21:15.704 queue_depth set to 113 (sdj) 00:21:15.704 queue_depth set to 113 (sdk) 00:21:15.704 queue_depth set to 113 (sdl) 00:21:15.704 queue_depth set to 113 (sdm) 00:21:15.704 queue_depth set to 113 (sdn) 00:21:15.704 queue_depth set to 113 (sdo) 00:21:15.704 queue_depth set to 113 (sdp) 00:21:15.704 queue_depth set to 113 (sdq) 00:21:15.704 queue_depth set to 113 (sdr) 00:21:15.704 queue_depth set to 113 (sds) 00:21:15.704 queue_depth set to 113 (sdt) 00:21:15.704 queue_depth set to 113 (sdu) 00:21:15.704 queue_depth set to 113 (sdv) 00:21:15.704 queue_depth set to 113 (sdw) 00:21:15.704 queue_depth set to 113 (sdx) 00:21:15.704 queue_depth set to 113 (sdy) 00:21:15.704 queue_depth set to 113 (sdz) 00:21:15.704 queue_depth set to 113 (sdaa) 00:21:15.704 queue_depth set to 113 (sdab) 00:21:15.704 queue_depth set to 113 (sdac) 00:21:15.704 queue_depth set to 113 (sdad) 00:21:15.963 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 
256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job11: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job12: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job13: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job14: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job15: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job16: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job17: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job18: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job19: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job20: (g=0): 
rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job21: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job22: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job23: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.963 job24: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.964 job25: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.964 job26: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.964 job27: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.964 job28: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.964 job29: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=16 00:21:15.964 fio-3.35 00:21:15.964 Starting 30 threads 00:21:15.964 [2024-07-23 02:16:24.619546] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.623614] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.627476] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.630388] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.633281] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.635777] scsi_bdev.c: 
616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.638394] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.641123] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.643723] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.646265] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.648780] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.651447] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.654037] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.656579] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.659234] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.661741] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.664225] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.666786] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.669353] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.671993] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.674633] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.680626] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported 
INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.683299] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.685913] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.688420] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.690992] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.693650] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.696140] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.698760] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:15.964 [2024-07-23 02:16:24.701378] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.532894] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.550785] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.554523] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.557322] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.559925] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.562460] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.564929] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.567315] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 
02:16:35.569808] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.572303] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.574687] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.577045] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.579493] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.581950] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.584405] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.586877] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 [2024-07-23 02:16:35.589328] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.208 00:21:28.208 job0: (groupid=0, jobs=1): err= 0: pid=81478: Tue Jul 23 02:16:35 2024 00:21:28.208 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10241msec); 0 zone resets 00:21:28.208 slat (usec): min=19, max=127, avg=62.10, stdev=15.06 00:21:28.208 clat (msec): min=14, max=482, avg=267.71, stdev=30.96 00:21:28.208 lat (msec): min=14, max=482, avg=267.77, stdev=30.97 00:21:28.208 clat percentiles (msec): 00:21:28.208 | 1.00th=[ 115], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.208 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.208 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.208 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 485], 99.95th=[ 485], 00:21:28.208 | 99.99th=[ 485] 00:21:28.208 bw ( KiB/s): min=14818, max=15840, per=3.33%, avg=15249.85, stdev=264.87, samples=20 00:21:28.208 iops : min= 57, max= 61, avg=59.35, stdev= 1.04, 
samples=20 00:21:28.208 lat (msec) : 20=0.16%, 50=0.33%, 100=0.49%, 250=1.47%, 500=97.55% 00:21:28.208 cpu : usr=0.15%, sys=0.31%, ctx=612, majf=0, minf=1 00:21:28.208 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 issued rwts: total=0,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.208 job1: (groupid=0, jobs=1): err= 0: pid=81479: Tue Jul 23 02:16:35 2024 00:21:28.208 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10234msec); 0 zone resets 00:21:28.208 slat (usec): min=18, max=328, avg=73.12, stdev=41.92 00:21:28.208 clat (msec): min=28, max=477, avg=267.95, stdev=29.04 00:21:28.208 lat (msec): min=28, max=477, avg=268.02, stdev=29.05 00:21:28.208 clat percentiles (msec): 00:21:28.208 | 1.00th=[ 130], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.208 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.208 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.208 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 477], 99.95th=[ 477], 00:21:28.208 | 99.99th=[ 477] 00:21:28.208 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15228.70, stdev=359.27, samples=20 00:21:28.208 iops : min= 56, max= 61, avg=59.35, stdev= 1.35, samples=20 00:21:28.208 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.208 cpu : usr=0.11%, sys=0.37%, ctx=667, majf=0, minf=1 00:21:28.208 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.208 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:21:28.208 job2: (groupid=0, jobs=1): err= 0: pid=81487: Tue Jul 23 02:16:35 2024 00:21:28.208 write: IOPS=59, BW=14.9MiB/s (15.7MB/s)(153MiB/10248msec); 0 zone resets 00:21:28.208 slat (usec): min=24, max=305, avg=68.30, stdev=31.23 00:21:28.208 clat (msec): min=13, max=483, avg=267.44, stdev=32.12 00:21:28.208 lat (msec): min=13, max=483, avg=267.51, stdev=32.13 00:21:28.208 clat percentiles (msec): 00:21:28.208 | 1.00th=[ 104], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.208 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.208 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.208 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 485], 00:21:28.208 | 99.99th=[ 485] 00:21:28.208 bw ( KiB/s): min=14818, max=15840, per=3.34%, avg=15276.95, stdev=248.83, samples=20 00:21:28.208 iops : min= 57, max= 61, avg=59.45, stdev= 1.00, samples=20 00:21:28.208 lat (msec) : 20=0.16%, 50=0.33%, 100=0.49%, 250=1.47%, 500=97.55% 00:21:28.208 cpu : usr=0.22%, sys=0.28%, ctx=650, majf=0, minf=1 00:21:28.208 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 issued rwts: total=0,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.208 job3: (groupid=0, jobs=1): err= 0: pid=81505: Tue Jul 23 02:16:35 2024 00:21:28.208 write: IOPS=60, BW=15.1MiB/s (15.8MB/s)(155MiB/10254msec); 0 zone resets 00:21:28.208 slat (usec): min=21, max=125, avg=54.25, stdev=10.81 00:21:28.208 clat (msec): min=8, max=488, avg=265.02, stdev=40.77 00:21:28.208 lat (msec): min=8, max=488, avg=265.07, stdev=40.78 00:21:28.208 clat percentiles (msec): 00:21:28.208 | 1.00th=[ 22], 5.00th=[ 262], 10.00th=[ 262], 20.00th=[ 266], 00:21:28.208 | 
30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.208 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.208 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 489], 99.95th=[ 489], 00:21:28.208 | 99.99th=[ 489] 00:21:28.208 bw ( KiB/s): min=14848, max=18432, per=3.37%, avg=15406.50, stdev=776.68, samples=20 00:21:28.208 iops : min= 58, max= 72, avg=60.05, stdev= 3.03, samples=20 00:21:28.208 lat (msec) : 10=0.32%, 20=0.65%, 50=0.49%, 100=0.49%, 250=1.62% 00:21:28.208 lat (msec) : 500=96.44% 00:21:28.208 cpu : usr=0.15%, sys=0.34%, ctx=620, majf=0, minf=1 00:21:28.208 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=97.6%, 32=0.0%, >=64=0.0% 00:21:28.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.208 issued rwts: total=0,618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.208 job4: (groupid=0, jobs=1): err= 0: pid=81513: Tue Jul 23 02:16:35 2024 00:21:28.208 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10234msec); 0 zone resets 00:21:28.208 slat (usec): min=18, max=168, avg=61.07, stdev=14.13 00:21:28.208 clat (msec): min=29, max=477, avg=267.97, stdev=29.03 00:21:28.208 lat (msec): min=29, max=477, avg=268.03, stdev=29.04 00:21:28.208 clat percentiles (msec): 00:21:28.208 | 1.00th=[ 130], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.208 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.208 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.208 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 477], 99.95th=[ 477], 00:21:28.208 | 99.99th=[ 477] 00:21:28.208 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15228.70, stdev=359.27, samples=20 00:21:28.208 iops : min= 56, max= 61, avg=59.35, stdev= 1.35, samples=20 00:21:28.208 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 
00:21:28.209 cpu : usr=0.22%, sys=0.28%, ctx=613, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job5: (groupid=0, jobs=1): err= 0: pid=81514: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10229msec); 0 zone resets 00:21:28.209 slat (usec): min=15, max=173, avg=55.39, stdev=13.55 00:21:28.209 clat (msec): min=31, max=469, avg=267.85, stdev=28.13 00:21:28.209 lat (msec): min=31, max=469, avg=267.91, stdev=28.13 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 133], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 368], 99.50th=[ 418], 99.90th=[ 468], 99.95th=[ 468], 00:21:28.209 | 99.99th=[ 468] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15227.35, stdev=325.11, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.35, stdev= 1.27, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.209 cpu : usr=0.10%, sys=0.39%, ctx=613, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job6: (groupid=0, jobs=1): err= 0: pid=81515: Tue Jul 23 
02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10236msec); 0 zone resets 00:21:28.209 slat (usec): min=16, max=114, avg=55.30, stdev=11.32 00:21:28.209 clat (msec): min=29, max=479, avg=268.02, stdev=29.18 00:21:28.209 lat (msec): min=29, max=479, avg=268.07, stdev=29.19 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 130], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:21:28.209 | 99.99th=[ 481] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15228.70, stdev=359.27, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.35, stdev= 1.35, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.209 cpu : usr=0.14%, sys=0.33%, ctx=610, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job7: (groupid=0, jobs=1): err= 0: pid=81516: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10234msec); 0 zone resets 00:21:28.209 slat (usec): min=18, max=129, avg=56.53, stdev=11.98 00:21:28.209 clat (msec): min=27, max=478, avg=267.96, stdev=29.30 00:21:28.209 lat (msec): min=27, max=478, avg=268.02, stdev=29.30 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 128], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 
90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 376], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:21:28.209 | 99.99th=[ 481] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15227.30, stdev=322.60, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.35, stdev= 1.18, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.209 cpu : usr=0.17%, sys=0.33%, ctx=613, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job8: (groupid=0, jobs=1): err= 0: pid=81517: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10244msec); 0 zone resets 00:21:28.209 slat (usec): min=22, max=9397, avg=70.16, stdev=378.94 00:21:28.209 clat (msec): min=28, max=478, avg=267.98, stdev=29.24 00:21:28.209 lat (msec): min=37, max=478, avg=268.05, stdev=29.11 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 129], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 376], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:21:28.209 | 99.99th=[ 481] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.32%, avg=15200.20, stdev=333.44, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.20, stdev= 1.28, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.209 cpu : usr=0.19%, sys=0.29%, ctx=613, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 
00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job9: (groupid=0, jobs=1): err= 0: pid=81540: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10230msec); 0 zone resets 00:21:28.209 slat (usec): min=17, max=304, avg=70.28, stdev=37.69 00:21:28.209 clat (msec): min=31, max=470, avg=267.87, stdev=28.22 00:21:28.209 lat (msec): min=31, max=470, avg=267.94, stdev=28.22 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 133], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 472], 00:21:28.209 | 99.99th=[ 472] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15224.30, stdev=326.42, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.25, stdev= 1.33, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.209 cpu : usr=0.17%, sys=0.27%, ctx=670, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job10: (groupid=0, jobs=1): err= 0: pid=81558: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10232msec); 0 zone resets 00:21:28.209 slat (usec): min=24, max=193, avg=54.18, 
stdev=11.86 00:21:28.209 clat (msec): min=30, max=474, avg=267.93, stdev=28.67 00:21:28.209 lat (msec): min=30, max=474, avg=267.98, stdev=28.67 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 131], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 477], 99.95th=[ 477], 00:21:28.209 | 99.99th=[ 477] 00:21:28.209 bw ( KiB/s): min=14307, max=15840, per=3.33%, avg=15224.30, stdev=366.11, samples=20 00:21:28.209 iops : min= 55, max= 61, avg=59.25, stdev= 1.48, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.209 cpu : usr=0.19%, sys=0.29%, ctx=611, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job11: (groupid=0, jobs=1): err= 0: pid=81598: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=60, BW=15.1MiB/s (15.8MB/s)(154MiB/10247msec); 0 zone resets 00:21:28.209 slat (usec): min=25, max=206, avg=58.76, stdev=15.54 00:21:28.209 clat (usec): min=1862, max=490245, avg=265246.11, stdev=40653.42 00:21:28.209 lat (usec): min=1919, max=490305, avg=265304.86, stdev=40658.31 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 24], 5.00th=[ 262], 10.00th=[ 262], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 489], 99.95th=[ 489], 00:21:28.209 | 
99.99th=[ 489] 00:21:28.209 bw ( KiB/s): min=14848, max=17920, per=3.37%, avg=15406.50, stdev=640.36, samples=20 00:21:28.209 iops : min= 58, max= 70, avg=60.05, stdev= 2.50, samples=20 00:21:28.209 lat (msec) : 2=0.16%, 4=0.16%, 10=0.49%, 20=0.16%, 50=0.49% 00:21:28.209 lat (msec) : 100=0.49%, 250=1.62%, 500=96.43% 00:21:28.209 cpu : usr=0.22%, sys=0.27%, ctx=622, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=97.6%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job12: (groupid=0, jobs=1): err= 0: pid=81611: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10238msec); 0 zone resets 00:21:28.209 slat (usec): min=30, max=4726, avg=66.32, stdev=189.40 00:21:28.209 clat (msec): min=27, max=478, avg=267.96, stdev=29.32 00:21:28.209 lat (msec): min=32, max=478, avg=268.03, stdev=29.26 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 128], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 376], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:21:28.209 | 99.99th=[ 481] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15224.25, stdev=364.03, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.25, stdev= 1.41, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.209 cpu : usr=0.21%, sys=0.28%, ctx=617, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job13: (groupid=0, jobs=1): err= 0: pid=81656: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10231msec); 0 zone resets 00:21:28.209 slat (usec): min=23, max=270, avg=55.14, stdev=19.00 00:21:28.209 clat (msec): min=32, max=471, avg=267.92, stdev=28.30 00:21:28.209 lat (msec): min=32, max=471, avg=267.97, stdev=28.30 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 133], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 472], 00:21:28.209 | 99.99th=[ 472] 00:21:28.209 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15224.30, stdev=326.42, samples=20 00:21:28.209 iops : min= 56, max= 61, avg=59.25, stdev= 1.33, samples=20 00:21:28.209 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.209 cpu : usr=0.13%, sys=0.35%, ctx=611, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.209 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.209 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.209 job14: (groupid=0, jobs=1): err= 0: pid=81675: Tue Jul 23 02:16:35 2024 00:21:28.209 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10236msec); 0 zone resets 00:21:28.209 slat (usec): min=18, max=175, avg=58.85, stdev=13.63 00:21:28.209 clat (msec): min=16, max=485, avg=268.02, 
stdev=30.64 00:21:28.209 lat (msec): min=16, max=485, avg=268.08, stdev=30.64 00:21:28.209 clat percentiles (msec): 00:21:28.209 | 1.00th=[ 122], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.209 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.209 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 279], 00:21:28.209 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 485], 00:21:28.209 | 99.99th=[ 485] 00:21:28.209 bw ( KiB/s): min=14818, max=15840, per=3.33%, avg=15225.80, stdev=278.78, samples=20 00:21:28.209 iops : min= 57, max= 61, avg=59.30, stdev= 1.08, samples=20 00:21:28.209 lat (msec) : 20=0.16%, 50=0.16%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.209 cpu : usr=0.23%, sys=0.21%, ctx=612, majf=0, minf=1 00:21:28.209 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job15: (groupid=0, jobs=1): err= 0: pid=81680: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10234msec); 0 zone resets 00:21:28.210 slat (usec): min=22, max=2491, avg=61.71, stdev=99.47 00:21:28.210 clat (msec): min=29, max=473, avg=267.91, stdev=28.73 00:21:28.210 lat (msec): min=32, max=473, avg=267.97, stdev=28.70 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 130], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 472], 00:21:28.210 | 99.99th=[ 472] 00:21:28.210 bw ( KiB/s): min=14336, max=15840, per=3.33%, 
avg=15227.35, stdev=365.09, samples=20 00:21:28.210 iops : min= 56, max= 61, avg=59.35, stdev= 1.42, samples=20 00:21:28.210 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.210 cpu : usr=0.21%, sys=0.26%, ctx=616, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job16: (groupid=0, jobs=1): err= 0: pid=81681: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=15.0MiB/s (15.7MB/s)(154MiB/10249msec); 0 zone resets 00:21:28.210 slat (usec): min=20, max=121, avg=41.72, stdev=12.32 00:21:28.210 clat (msec): min=2, max=485, avg=266.64, stdev=35.56 00:21:28.210 lat (msec): min=2, max=485, avg=266.68, stdev=35.56 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 70], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 485], 00:21:28.210 | 99.99th=[ 485] 00:21:28.210 bw ( KiB/s): min=14818, max=16384, per=3.34%, avg=15304.20, stdev=405.03, samples=20 00:21:28.210 iops : min= 57, max= 64, avg=59.65, stdev= 1.66, samples=20 00:21:28.210 lat (msec) : 4=0.16%, 10=0.16%, 20=0.16%, 50=0.33%, 100=0.49% 00:21:28.210 lat (msec) : 250=1.63%, 500=97.07% 00:21:28.210 cpu : usr=0.13%, sys=0.19%, ctx=628, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.6%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,614,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job17: (groupid=0, jobs=1): err= 0: pid=81682: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10227msec); 0 zone resets 00:21:28.210 slat (usec): min=33, max=205, avg=60.00, stdev=13.26 00:21:28.210 clat (msec): min=30, max=468, avg=267.80, stdev=28.18 00:21:28.210 lat (msec): min=30, max=468, avg=267.86, stdev=28.18 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 132], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 368], 99.50th=[ 418], 99.90th=[ 468], 99.95th=[ 468], 00:21:28.210 | 99.99th=[ 468] 00:21:28.210 bw ( KiB/s): min=14364, max=15840, per=3.33%, avg=15228.75, stdev=321.10, samples=20 00:21:28.210 iops : min= 56, max= 61, avg=59.35, stdev= 1.27, samples=20 00:21:28.210 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.210 cpu : usr=0.20%, sys=0.30%, ctx=622, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job18: (groupid=0, jobs=1): err= 0: pid=81683: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.7MB/s)(153MiB/10251msec); 0 zone resets 00:21:28.210 slat (usec): min=27, max=14161, avg=64.35, stdev=570.47 00:21:28.210 clat (msec): min=2, max=489, avg=266.72, stdev=35.94 00:21:28.210 lat (msec): min=7, max=489, avg=266.78, stdev=35.77 
00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 69], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 388], 99.50th=[ 439], 99.90th=[ 489], 99.95th=[ 489], 00:21:28.210 | 99.99th=[ 489] 00:21:28.210 bw ( KiB/s): min=14336, max=15872, per=3.34%, avg=15304.15, stdev=327.48, samples=20 00:21:28.210 iops : min= 56, max= 62, avg=59.65, stdev= 1.31, samples=20 00:21:28.210 lat (msec) : 4=0.16%, 10=0.16%, 20=0.33%, 50=0.16%, 100=0.49% 00:21:28.210 lat (msec) : 250=1.63%, 500=97.06% 00:21:28.210 cpu : usr=0.13%, sys=0.19%, ctx=625, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.6%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job19: (groupid=0, jobs=1): err= 0: pid=81684: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10228msec); 0 zone resets 00:21:28.210 slat (usec): min=30, max=235, avg=57.81, stdev=15.65 00:21:28.210 clat (msec): min=32, max=468, avg=267.83, stdev=27.99 00:21:28.210 lat (msec): min=32, max=468, avg=267.89, stdev=27.99 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 133], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 368], 99.50th=[ 418], 99.90th=[ 468], 99.95th=[ 468], 00:21:28.210 | 99.99th=[ 468] 00:21:28.210 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15227.35, stdev=325.11, samples=20 
00:21:28.210 iops : min= 56, max= 61, avg=59.35, stdev= 1.27, samples=20 00:21:28.210 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.210 cpu : usr=0.20%, sys=0.20%, ctx=617, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job20: (groupid=0, jobs=1): err= 0: pid=81685: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.7MB/s)(153MiB/10248msec); 0 zone resets 00:21:28.210 slat (usec): min=22, max=240, avg=50.11, stdev=20.44 00:21:28.210 clat (msec): min=13, max=481, avg=267.46, stdev=31.71 00:21:28.210 lat (msec): min=13, max=481, avg=267.51, stdev=31.71 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 106], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:21:28.210 | 99.99th=[ 481] 00:21:28.210 bw ( KiB/s): min=14818, max=15872, per=3.33%, avg=15249.85, stdev=312.65, samples=20 00:21:28.210 iops : min= 57, max= 62, avg=59.35, stdev= 1.23, samples=20 00:21:28.210 lat (msec) : 20=0.16%, 50=0.33%, 100=0.49%, 250=1.63%, 500=97.39% 00:21:28.210 cpu : usr=0.17%, sys=0.19%, ctx=656, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,612,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job21: (groupid=0, jobs=1): err= 0: pid=81686: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10230msec); 0 zone resets 00:21:28.210 slat (usec): min=17, max=153, avg=57.34, stdev=12.48 00:21:28.210 clat (msec): min=30, max=472, avg=267.88, stdev=28.55 00:21:28.210 lat (msec): min=30, max=472, avg=267.94, stdev=28.55 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 131], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 472], 00:21:28.210 | 99.99th=[ 472] 00:21:28.210 bw ( KiB/s): min=14307, max=15840, per=3.33%, avg=15224.30, stdev=366.11, samples=20 00:21:28.210 iops : min= 55, max= 61, avg=59.25, stdev= 1.48, samples=20 00:21:28.210 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.210 cpu : usr=0.22%, sys=0.27%, ctx=616, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job22: (groupid=0, jobs=1): err= 0: pid=81687: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10241msec); 0 zone resets 00:21:28.210 slat (usec): min=26, max=1465, avg=56.32, stdev=59.17 00:21:28.210 clat (msec): min=13, max=486, avg=267.68, stdev=31.77 00:21:28.210 lat (msec): min=14, max=486, avg=267.73, stdev=31.75 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 110], 5.00th=[ 262], 
10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 485], 00:21:28.210 | 99.99th=[ 485] 00:21:28.210 bw ( KiB/s): min=14848, max=15840, per=3.33%, avg=15249.80, stdev=261.78, samples=20 00:21:28.210 iops : min= 58, max= 61, avg=59.35, stdev= 0.93, samples=20 00:21:28.210 lat (msec) : 20=0.16%, 50=0.33%, 100=0.49%, 250=1.47%, 500=97.55% 00:21:28.210 cpu : usr=0.15%, sys=0.29%, ctx=628, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job23: (groupid=0, jobs=1): err= 0: pid=81688: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.7MB/s)(153MiB/10244msec); 0 zone resets 00:21:28.210 slat (usec): min=27, max=101, avg=56.15, stdev= 9.40 00:21:28.210 clat (msec): min=8, max=486, avg=267.34, stdev=33.10 00:21:28.210 lat (msec): min=8, max=486, avg=267.40, stdev=33.11 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 96], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 485], 99.95th=[ 485], 00:21:28.210 | 99.99th=[ 485] 00:21:28.210 bw ( KiB/s): min=14848, max=15840, per=3.34%, avg=15275.40, stdev=244.91, samples=20 00:21:28.210 iops : min= 58, max= 61, avg=59.45, stdev= 0.89, samples=20 00:21:28.210 lat (msec) : 10=0.16%, 20=0.16%, 50=0.33%, 
100=0.49%, 250=1.47% 00:21:28.210 lat (msec) : 500=97.39% 00:21:28.210 cpu : usr=0.17%, sys=0.32%, ctx=612, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.210 job24: (groupid=0, jobs=1): err= 0: pid=81689: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10242msec); 0 zone resets 00:21:28.210 slat (usec): min=20, max=3556, avg=60.48, stdev=142.56 00:21:28.210 clat (msec): min=12, max=481, avg=267.67, stdev=31.12 00:21:28.210 lat (msec): min=12, max=481, avg=267.73, stdev=31.09 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 113], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 275], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 481], 99.95th=[ 481], 00:21:28.210 | 99.99th=[ 481] 00:21:28.210 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15251.40, stdev=313.14, samples=20 00:21:28.210 iops : min= 56, max= 61, avg=59.40, stdev= 1.23, samples=20 00:21:28.210 lat (msec) : 20=0.16%, 50=0.33%, 100=0.49%, 250=1.64%, 500=97.38% 00:21:28.210 cpu : usr=0.15%, sys=0.32%, ctx=619, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,611,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 
00:21:28.210 job25: (groupid=0, jobs=1): err= 0: pid=81690: Tue Jul 23 02:16:35 2024 00:21:28.210 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10229msec); 0 zone resets 00:21:28.210 slat (usec): min=20, max=116, avg=53.82, stdev= 9.99 00:21:28.210 clat (msec): min=32, max=468, avg=267.85, stdev=28.02 00:21:28.210 lat (msec): min=32, max=468, avg=267.90, stdev=28.03 00:21:28.210 clat percentiles (msec): 00:21:28.210 | 1.00th=[ 133], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.210 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.210 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.210 | 99.00th=[ 368], 99.50th=[ 418], 99.90th=[ 468], 99.95th=[ 468], 00:21:28.210 | 99.99th=[ 468] 00:21:28.210 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15227.35, stdev=325.11, samples=20 00:21:28.210 iops : min= 56, max= 61, avg=59.35, stdev= 1.27, samples=20 00:21:28.210 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.210 cpu : usr=0.21%, sys=0.26%, ctx=611, majf=0, minf=1 00:21:28.210 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.210 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.210 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.211 job26: (groupid=0, jobs=1): err= 0: pid=81691: Tue Jul 23 02:16:35 2024 00:21:28.211 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10230msec); 0 zone resets 00:21:28.211 slat (usec): min=18, max=219, avg=60.63, stdev=15.08 00:21:28.211 clat (msec): min=31, max=470, avg=267.88, stdev=28.23 00:21:28.211 lat (msec): min=31, max=470, avg=267.94, stdev=28.23 00:21:28.211 clat percentiles (msec): 00:21:28.211 | 1.00th=[ 133], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.211 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 
271], 60.00th=[ 271], 00:21:28.211 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.211 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 472], 99.95th=[ 472], 00:21:28.211 | 99.99th=[ 472] 00:21:28.211 bw ( KiB/s): min=14336, max=15840, per=3.33%, avg=15224.30, stdev=326.42, samples=20 00:21:28.211 iops : min= 56, max= 61, avg=59.25, stdev= 1.33, samples=20 00:21:28.211 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.211 cpu : usr=0.23%, sys=0.26%, ctx=614, majf=0, minf=1 00:21:28.211 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.211 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.211 job27: (groupid=0, jobs=1): err= 0: pid=81692: Tue Jul 23 02:16:35 2024 00:21:28.211 write: IOPS=59, BW=15.0MiB/s (15.7MB/s)(153MiB/10246msec); 0 zone resets 00:21:28.211 slat (usec): min=24, max=205, avg=46.13, stdev=13.67 00:21:28.211 clat (msec): min=4, max=486, avg=266.98, stdev=34.58 00:21:28.211 lat (msec): min=5, max=486, avg=267.03, stdev=34.58 00:21:28.211 clat percentiles (msec): 00:21:28.211 | 1.00th=[ 82], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.211 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.211 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.211 | 99.00th=[ 388], 99.50th=[ 435], 99.90th=[ 489], 99.95th=[ 489], 00:21:28.211 | 99.99th=[ 489] 00:21:28.211 bw ( KiB/s): min=14336, max=15872, per=3.34%, avg=15304.15, stdev=327.48, samples=20 00:21:28.211 iops : min= 56, max= 62, avg=59.65, stdev= 1.31, samples=20 00:21:28.211 lat (msec) : 10=0.16%, 20=0.33%, 50=0.33%, 100=0.49%, 250=1.47% 00:21:28.211 lat (msec) : 500=97.23% 00:21:28.211 cpu : usr=0.21%, sys=0.13%, ctx=625, 
majf=0, minf=1 00:21:28.211 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.6%, 32=0.0%, >=64=0.0% 00:21:28.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 issued rwts: total=0,613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.211 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.211 job28: (groupid=0, jobs=1): err= 0: pid=81693: Tue Jul 23 02:16:35 2024 00:21:28.211 write: IOPS=59, BW=14.9MiB/s (15.6MB/s)(153MiB/10235msec); 0 zone resets 00:21:28.211 slat (usec): min=17, max=4351, avg=56.02, stdev=175.48 00:21:28.211 clat (msec): min=26, max=474, avg=267.93, stdev=28.87 00:21:28.211 lat (msec): min=26, max=474, avg=267.99, stdev=28.83 00:21:28.211 clat percentiles (msec): 00:21:28.211 | 1.00th=[ 130], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.211 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.211 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.211 | 99.00th=[ 372], 99.50th=[ 422], 99.90th=[ 477], 99.95th=[ 477], 00:21:28.211 | 99.99th=[ 477] 00:21:28.211 bw ( KiB/s): min=14364, max=15872, per=3.33%, avg=15228.80, stdev=323.79, samples=20 00:21:28.211 iops : min= 56, max= 62, avg=59.35, stdev= 1.35, samples=20 00:21:28.211 lat (msec) : 50=0.33%, 100=0.49%, 250=1.64%, 500=97.54% 00:21:28.211 cpu : usr=0.09%, sys=0.27%, ctx=629, majf=0, minf=1 00:21:28.211 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.211 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.211 job29: (groupid=0, jobs=1): err= 0: pid=81694: Tue Jul 23 02:16:35 2024 00:21:28.211 write: IOPS=59, 
BW=14.9MiB/s (15.6MB/s)(153MiB/10235msec); 0 zone resets 00:21:28.211 slat (usec): min=17, max=2327, avg=61.70, stdev=92.67 00:21:28.211 clat (msec): min=29, max=475, avg=267.95, stdev=28.84 00:21:28.211 lat (msec): min=32, max=475, avg=268.01, stdev=28.81 00:21:28.211 clat percentiles (msec): 00:21:28.211 | 1.00th=[ 131], 5.00th=[ 262], 10.00th=[ 264], 20.00th=[ 266], 00:21:28.211 | 30.00th=[ 268], 40.00th=[ 268], 50.00th=[ 271], 60.00th=[ 271], 00:21:28.211 | 70.00th=[ 271], 80.00th=[ 271], 90.00th=[ 271], 95.00th=[ 279], 00:21:28.211 | 99.00th=[ 376], 99.50th=[ 426], 99.90th=[ 477], 99.95th=[ 477], 00:21:28.211 | 99.99th=[ 477] 00:21:28.211 bw ( KiB/s): min=14336, max=15872, per=3.33%, avg=15227.40, stdev=327.77, samples=20 00:21:28.211 iops : min= 56, max= 62, avg=59.35, stdev= 1.35, samples=20 00:21:28.211 lat (msec) : 50=0.33%, 100=0.49%, 250=1.48%, 500=97.70% 00:21:28.211 cpu : usr=0.23%, sys=0.25%, ctx=616, majf=0, minf=1 00:21:28.211 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=97.5%, 32=0.0%, >=64=0.0% 00:21:28.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:28.211 issued rwts: total=0,610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:28.211 latency : target=0, window=0, percentile=100.00%, depth=16 00:21:28.211 00:21:28.211 Run status group 0 (all jobs): 00:21:28.211 WRITE: bw=447MiB/s (469MB/s), 14.9MiB/s-15.1MiB/s (15.6MB/s-15.8MB/s), io=4584MiB (4806MB), run=10227-10254msec 00:21:28.211 00:21:28.211 Disk stats (read/write): 00:21:28.211 sda: ios=48/599, merge=0/0, ticks=136/158876, in_queue=159012, util=95.09% 00:21:28.211 sdb: ios=48/598, merge=0/0, ticks=140/158811, in_queue=158950, util=95.12% 00:21:28.211 sdc: ios=48/600, merge=0/0, ticks=127/158941, in_queue=159069, util=95.50% 00:21:28.211 sdd: ios=48/607, merge=0/0, ticks=148/159254, in_queue=159403, util=95.84% 00:21:28.211 sde: ios=43/598, merge=0/0, ticks=129/158832, 
in_queue=158962, util=95.51% 00:21:28.211 sdf: ios=48/597, merge=0/0, ticks=162/158617, in_queue=158778, util=95.73% 00:21:28.211 sdg: ios=34/598, merge=0/0, ticks=118/158843, in_queue=158960, util=95.50% 00:21:28.211 sdh: ios=28/598, merge=0/0, ticks=111/158817, in_queue=158928, util=95.71% 00:21:28.211 sdi: ios=24/598, merge=0/0, ticks=99/158826, in_queue=158925, util=95.72% 00:21:28.211 sdj: ios=5/597, merge=0/0, ticks=25/158600, in_queue=158625, util=95.56% 00:21:28.211 sdk: ios=13/597, merge=0/0, ticks=46/158594, in_queue=158640, util=95.73% 00:21:28.211 sdl: ios=0/607, merge=0/0, ticks=0/159369, in_queue=159369, util=96.28% 00:21:28.211 sdm: ios=0/598, merge=0/0, ticks=0/158812, in_queue=158813, util=96.04% 00:21:28.211 sdn: ios=0/597, merge=0/0, ticks=0/158619, in_queue=158619, util=96.20% 00:21:28.211 sdo: ios=0/599, merge=0/0, ticks=0/159010, in_queue=159010, util=96.45% 00:21:28.211 sdp: ios=0/598, merge=0/0, ticks=0/158846, in_queue=158845, util=96.70% 00:21:28.211 sdq: ios=0/603, merge=0/0, ticks=0/159176, in_queue=159177, util=97.09% 00:21:28.211 sdr: ios=0/597, merge=0/0, ticks=0/158597, in_queue=158597, util=97.03% 00:21:28.211 sds: ios=0/603, merge=0/0, ticks=0/159176, in_queue=159177, util=97.46% 00:21:28.211 sdt: ios=0/597, merge=0/0, ticks=0/158611, in_queue=158611, util=97.36% 00:21:28.211 sdu: ios=0/600, merge=0/0, ticks=0/158960, in_queue=158959, util=97.63% 00:21:28.211 sdv: ios=0/597, merge=0/0, ticks=0/158588, in_queue=158588, util=97.63% 00:21:28.211 sdw: ios=0/600, merge=0/0, ticks=0/159037, in_queue=159037, util=97.92% 00:21:28.211 sdx: ios=0/601, merge=0/0, ticks=0/159120, in_queue=159121, util=98.10% 00:21:28.211 sdy: ios=0/599, merge=0/0, ticks=0/158867, in_queue=158866, util=98.02% 00:21:28.211 sdz: ios=0/597, merge=0/0, ticks=0/158619, in_queue=158619, util=97.94% 00:21:28.211 sdaa: ios=0/597, merge=0/0, ticks=0/158613, in_queue=158612, util=98.18% 00:21:28.211 sdab: ios=0/602, merge=0/0, ticks=0/159104, in_queue=159104, util=98.43% 
00:21:28.211 sdac: ios=0/598, merge=0/0, ticks=0/158816, in_queue=158817, util=98.39% 00:21:28.211 sdad: ios=0/598, merge=0/0, ticks=0/158855, in_queue=158854, util=98.72% 00:21:28.211 [2024-07-23 02:16:35.591726] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.597668] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.600187] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.602748] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.605061] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.607419] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 02:16:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@79 -- # sync 00:21:28.211 [2024-07-23 02:16:35.610364] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.613603] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.617051] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.620288] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 [2024-07-23 02:16:35.623628] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 02:16:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:28.211 02:16:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@83 -- # rm -f 00:21:28.211 [2024-07-23 02:16:35.626676] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 
02:16:35 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@84 -- # iscsicleanup 00:21:28.211 Cleaning up iSCSI connection 00:21:28.211 02:16:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:21:28.211 02:16:35 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:21:28.211 [2024-07-23 02:16:35.630087] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:28.211 Logging out of session [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 
10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] 00:21:28.211 Logging out of session [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] 00:21:28.211 Logout of [sid: 41, target: iqn.2016-06.io.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 42, target: iqn.2016-06.io.spdk:Target2, portal: 10.0.0.1,3260] successful. 
00:21:28.211 Logout of [sid: 43, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 44, target: iqn.2016-06.io.spdk:Target4, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 45, target: iqn.2016-06.io.spdk:Target5, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 46, target: iqn.2016-06.io.spdk:Target6, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 47, target: iqn.2016-06.io.spdk:Target7, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 48, target: iqn.2016-06.io.spdk:Target8, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 49, target: iqn.2016-06.io.spdk:Target9, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 50, target: iqn.2016-06.io.spdk:Target10, portal: 10.0.0.1,3260] successful. 00:21:28.211 Logout of [sid: 51, target: iqn.2016-06.io.spdk:Target11, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 52, target: iqn.2016-06.io.spdk:Target12, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 53, target: iqn.2016-06.io.spdk:Target13, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 54, target: iqn.2016-06.io.spdk:Target14, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 55, target: iqn.2016-06.io.spdk:Target15, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 56, target: iqn.2016-06.io.spdk:Target16, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 57, target: iqn.2016-06.io.spdk:Target17, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 58, target: iqn.2016-06.io.spdk:Target18, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 59, target: iqn.2016-06.io.spdk:Target19, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 60, target: iqn.2016-06.io.spdk:Target20, portal: 10.0.0.1,3260] successful. 
00:21:28.212 Logout of [sid: 61, target: iqn.2016-06.io.spdk:Target21, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 62, target: iqn.2016-06.io.spdk:Target22, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 63, target: iqn.2016-06.io.spdk:Target23, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 64, target: iqn.2016-06.io.spdk:Target24, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 65, target: iqn.2016-06.io.spdk:Target25, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 66, target: iqn.2016-06.io.spdk:Target26, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 67, target: iqn.2016-06.io.spdk:Target27, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 68, target: iqn.2016-06.io.spdk:Target28, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 69, target: iqn.2016-06.io.spdk:Target29, portal: 10.0.0.1,3260] successful. 00:21:28.212 Logout of [sid: 70, target: iqn.2016-06.io.spdk:Target30, portal: 10.0.0.1,3260] successful. 
00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@983 -- # rm -rf 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@85 -- # remove_backends 00:21:28.212 INFO: Removing lvol bdevs 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@22 -- # echo 'INFO: Removing lvol bdevs' 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # seq 1 30 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_1 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_1 00:21:28.212 [2024-07-23 02:16:36.755177] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (e7beb211-7587-43a0-98a8-5092276a79e8) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:28.212 INFO: lvol bdev lvs0/lbd_1 removed 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_1 removed' 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_2 00:21:28.212 02:16:36 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_2 00:21:28.470 [2024-07-23 02:16:37.027236] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0dc43563-59fa-42e9-9ec1-bba0e65e8927) 
received event(SPDK_BDEV_EVENT_REMOVE) 00:21:28.470 INFO: lvol bdev lvs0/lbd_2 removed 00:21:28.470 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_2 removed' 00:21:28.470 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.470 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_3 00:21:28.470 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_3 00:21:28.728 [2024-07-23 02:16:37.307376] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4935aed5-b46c-45d0-a2f7-ca95cf345391) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:28.728 INFO: lvol bdev lvs0/lbd_3 removed 00:21:28.728 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_3 removed' 00:21:28.728 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.728 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_4 00:21:28.728 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_4 00:21:28.986 [2024-07-23 02:16:37.559451] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (323c1985-7be6-4135-ac96-0f285758837a) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:28.986 INFO: lvol bdev lvs0/lbd_4 removed 00:21:28.986 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_4 removed' 00:21:28.986 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:28.986 
02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_5 00:21:28.986 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_5 00:21:28.986 [2024-07-23 02:16:37.755548] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (fb481147-cc98-4f5f-bb32-8550c4e2f1d2) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:29.244 INFO: lvol bdev lvs0/lbd_5 removed 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_5 removed' 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_6 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_6 00:21:29.244 [2024-07-23 02:16:37.943655] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (0d3dbfc0-3cef-4c5d-9614-fd6c63b74ff1) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:29.244 INFO: lvol bdev lvs0/lbd_6 removed 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_6 removed' 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_7 00:21:29.244 02:16:37 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_7 00:21:29.502 [2024-07-23 02:16:38.127721] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name 
(e373c7d4-4d2c-4532-891e-80771f359108) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:29.502 INFO: lvol bdev lvs0/lbd_7 removed 00:21:29.502 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_7 removed' 00:21:29.502 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.502 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_8 00:21:29.502 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_8 00:21:29.760 [2024-07-23 02:16:38.315814] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3bffd938-62b5-42e9-914c-953a2e4d1241) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:29.760 INFO: lvol bdev lvs0/lbd_8 removed 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_8 removed' 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_9 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_9 00:21:29.760 [2024-07-23 02:16:38.503902] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (127efcf1-9125-4679-b032-688738472f77) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:29.760 INFO: lvol bdev lvs0/lbd_9 removed 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_9 removed' 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 
$CONNECTION_NUMBER) 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_10 00:21:29.760 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_10 00:21:30.018 [2024-07-23 02:16:38.747967] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (5253997b-57b1-40f4-9f4b-443768f6ae80) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:30.018 INFO: lvol bdev lvs0/lbd_10 removed 00:21:30.018 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_10 removed' 00:21:30.018 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.018 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_11 00:21:30.018 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_11 00:21:30.276 [2024-07-23 02:16:38.944151] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (2157996a-bdd5-473a-a9ae-c51cecb9cf5a) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:30.276 INFO: lvol bdev lvs0/lbd_11 removed 00:21:30.276 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_11 removed' 00:21:30.276 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.276 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_12 00:21:30.276 02:16:38 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_12 00:21:30.535 [2024-07-23 02:16:39.140197] lun.c: 398:bdev_event_cb: 
*NOTICE*: bdev name (a0bb6bce-6837-47a0-b10b-2b3f912afc85) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:30.535 INFO: lvol bdev lvs0/lbd_12 removed 00:21:30.535 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_12 removed' 00:21:30.535 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.535 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_13 00:21:30.535 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_13 00:21:30.792 [2024-07-23 02:16:39.344294] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9363909b-c7cb-40f9-aafe-fe510f7ec47a) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:30.792 INFO: lvol bdev lvs0/lbd_13 removed 00:21:30.792 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_13 removed' 00:21:30.792 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.792 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_14 00:21:30.792 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_14 00:21:30.792 [2024-07-23 02:16:39.548378] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9a244f9a-7a24-4e69-ac2d-45c55557b976) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:30.792 INFO: lvol bdev lvs0/lbd_14 removed 00:21:30.792 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_14 removed' 00:21:30.792 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- 
multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:30.793 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_15 00:21:30.793 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_15 00:21:31.050 [2024-07-23 02:16:39.744445] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (66a46c06-289a-40e1-b196-b136df2556cb) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:31.050 INFO: lvol bdev lvs0/lbd_15 removed 00:21:31.050 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_15 removed' 00:21:31.050 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.050 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_16 00:21:31.050 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_16 00:21:31.308 [2024-07-23 02:16:39.932545] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (9cf821c0-0eed-4afa-bb7d-a922600a6764) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:31.308 INFO: lvol bdev lvs0/lbd_16 removed 00:21:31.308 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_16 removed' 00:21:31.308 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.308 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_17 00:21:31.308 02:16:39 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_17 00:21:31.565 
[2024-07-23 02:16:40.172636] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (322cab25-17a8-405b-96ed-cd12c4a4ad0b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:31.565 INFO: lvol bdev lvs0/lbd_17 removed 00:21:31.565 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_17 removed' 00:21:31.565 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.565 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_18 00:21:31.565 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_18 00:21:31.824 [2024-07-23 02:16:40.384773] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (64779e35-7935-49e9-a2e8-78d7bbcd1da0) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:31.824 INFO: lvol bdev lvs0/lbd_18 removed 00:21:31.824 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_18 removed' 00:21:31.824 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:31.824 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_19 00:21:31.824 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_19 00:21:32.082 [2024-07-23 02:16:40.632871] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (07e6a6c3-e1c0-49c9-9cd8-7b30092508a3) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:32.082 INFO: lvol bdev lvs0/lbd_19 removed 00:21:32.082 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_19 removed' 00:21:32.082 02:16:40 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:32.082 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_20 00:21:32.082 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_20 00:21:32.082 [2024-07-23 02:16:40.844970] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (74038742-7ef3-4800-a199-8d668d891765) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:32.340 INFO: lvol bdev lvs0/lbd_20 removed 00:21:32.340 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_20 removed' 00:21:32.340 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:32.340 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_21 00:21:32.340 02:16:40 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_21 00:21:32.340 [2024-07-23 02:16:41.041082] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (314a21e9-2396-45d8-a407-e4c56e1f70b3) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:32.340 INFO: lvol bdev lvs0/lbd_21 removed 00:21:32.340 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_21 removed' 00:21:32.340 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:32.340 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_22 00:21:32.340 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete lvs0/lbd_22 00:21:32.599 [2024-07-23 02:16:41.233134] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (b5f88e30-1b68-4025-8cfd-3a183456c0cf) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:32.599 INFO: lvol bdev lvs0/lbd_22 removed 00:21:32.599 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_22 removed' 00:21:32.599 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:32.599 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_23 00:21:32.599 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_23 00:21:32.857 [2024-07-23 02:16:41.413214] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (73cd8c1b-8319-4fe7-824f-96f71264abcf) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:32.857 INFO: lvol bdev lvs0/lbd_23 removed 00:21:32.857 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_23 removed' 00:21:32.857 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:32.857 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_24 00:21:32.857 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_24 00:21:32.857 [2024-07-23 02:16:41.617330] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (a769b013-27aa-440f-9e67-9f119dba8e5b) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:33.115 INFO: lvol bdev lvs0/lbd_24 removed 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_24 
removed' 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_25 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_25 00:21:33.115 [2024-07-23 02:16:41.817397] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (c8ba18fa-dd17-44b7-be5d-f987a6e4f38a) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:33.115 INFO: lvol bdev lvs0/lbd_25 removed 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_25 removed' 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_26 00:21:33.115 02:16:41 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_26 00:21:33.373 [2024-07-23 02:16:42.013454] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (3fd8384b-02d6-4ccb-a77a-5461eb05b7dc) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:33.373 INFO: lvol bdev lvs0/lbd_26 removed 00:21:33.373 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_26 removed' 00:21:33.373 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:33.373 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_27 00:21:33.373 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_27 00:21:33.632 [2024-07-23 02:16:42.205559] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (7faac784-30f5-47ee-bef1-07e910a23f01) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:33.632 INFO: lvol bdev lvs0/lbd_27 removed 00:21:33.632 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_27 removed' 00:21:33.632 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:33.632 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_28 00:21:33.632 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_28 00:21:33.632 [2024-07-23 02:16:42.405647] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (81678814-f14e-43ef-ae74-4fe9f7e2b565) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:33.892 INFO: lvol bdev lvs0/lbd_28 removed 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_28 removed' 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_29 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_29 00:21:33.892 [2024-07-23 02:16:42.609717] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (14b8fb6f-6e90-4daa-b159-bda7aa246ef0) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:33.892 INFO: lvol bdev lvs0/lbd_29 removed 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- 
# echo -e '\tINFO: lvol bdev lvs0/lbd_29 removed' 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@23 -- # for i in $(seq 1 $CONNECTION_NUMBER) 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@24 -- # lun=lvs0/lbd_30 00:21:33.892 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs0/lbd_30 00:21:34.151 [2024-07-23 02:16:42.789876] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (4153f8ef-01e7-4f39-b7fe-8b6d20214951) received event(SPDK_BDEV_EVENT_REMOVE) 00:21:34.151 INFO: lvol bdev lvs0/lbd_30 removed 00:21:34.151 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@26 -- # echo -e '\tINFO: lvol bdev lvs0/lbd_30 removed' 00:21:34.151 02:16:42 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@28 -- # sleep 1 00:21:35.093 INFO: Removing lvol stores 00:21:35.093 02:16:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@30 -- # echo 'INFO: Removing lvol stores' 00:21:35.093 02:16:43 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:21:35.352 INFO: lvol store lvs0 removed 00:21:35.352 INFO: Removing NVMe 00:21:35.352 02:16:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@32 -- # echo 'INFO: lvol store lvs0 removed' 00:21:35.352 02:16:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@34 -- # echo 'INFO: Removing NVMe' 00:21:35.352 02:16:44 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@37 -- # return 0 00:21:37.257 02:16:45 
iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@86 -- # killprocess 79838 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 79838 ']' 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@952 -- # kill -0 79838 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # uname 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 79838 00:21:37.257 killing process with pid 79838 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 79838' 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@967 -- # kill 79838 00:21:37.257 02:16:45 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@972 -- # wait 79838 00:21:38.635 02:16:47 iscsi_tgt.iscsi_tgt_multiconnection -- multiconnection/multiconnection.sh@87 -- # iscsitestfini 00:21:38.635 02:16:47 iscsi_tgt.iscsi_tgt_multiconnection -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:21:38.635 ************************************ 00:21:38.635 END TEST iscsi_tgt_multiconnection 00:21:38.635 ************************************ 00:21:38.635 00:21:38.635 real 0m48.613s 00:21:38.635 user 0m57.260s 00:21:38.635 sys 0m13.150s 00:21:38.635 02:16:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:38.635 02:16:47 iscsi_tgt.iscsi_tgt_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:21:38.894 
02:16:47 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:21:38.894 02:16:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@46 -- # '[' 1 -eq 1 ']' 00:21:38.894 02:16:47 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@47 -- # run_test iscsi_tgt_ext4test /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:21:38.894 02:16:47 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:38.894 02:16:47 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:38.894 02:16:47 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:21:38.894 ************************************ 00:21:38.894 START TEST iscsi_tgt_ext4test 00:21:38.894 ************************************ 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test/ext4test.sh 00:21:38.894 * Looking for test storage... 00:21:38.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/ext4test 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- 
iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:38.894 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@24 -- # iscsitestinit 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@28 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@29 -- # node_base=iqn.2013-06.com.intel.ch.spdk 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@31 -- # timing_enter start_iscsi_tgt 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:21:38.895 02:16:47 
iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@34 -- # pid=82233 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@33 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt --wait-for-rpc 00:21:38.895 Process pid: 82233 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@35 -- # echo 'Process pid: 82233' 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@37 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@39 -- # waitforlisten 82233 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@829 -- # '[' -z 82233 ']' 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:38.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:38.895 02:16:47 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:21:39.153 [2024-07-23 02:16:47.689854] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:21:39.153 [2024-07-23 02:16:47.690064] [ DPDK EAL parameters: iscsi --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82233 ] 00:21:39.153 [2024-07-23 02:16:47.852378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.412 [2024-07-23 02:16:48.046433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.979 02:16:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.979 02:16:48 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@862 -- # return 0 00:21:39.979 02:16:48 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_set_options -o 30 -a 4 -b iqn.2013-06.com.intel.ch.spdk 00:21:40.237 02:16:48 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:41.173 02:16:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:41.173 02:16:49 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:21:41.432 02:16:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 512 4096 --name Malloc0 00:21:42.039 Malloc0 00:21:42.039 iscsi_tgt is listening. Running tests... 00:21:42.039 02:16:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@44 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:21:42.039 02:16:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@46 -- # timing_exit start_iscsi_tgt 00:21:42.039 02:16:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.039 02:16:50 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:21:42.297 02:16:50 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260 00:21:42.297 02:16:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:21:42.556 02:16:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_create Malloc0 00:21:42.815 true 00:21:42.815 02:16:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target0 Target0_alias EE_Malloc0:0 1:2 64 -d 00:21:43.074 02:16:51 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@55 -- # sleep 1 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@57 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:21:44.011 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target0 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@58 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:21:44.011 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:21:44.011 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 
00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@59 -- # waitforiscsidevices 1 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:21:44.011 [2024-07-23 02:16:52.694245] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:21:44.011 Test error injection 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@61 -- # echo 'Test error injection' 00:21:44.011 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 all failure -n 1000 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # iscsiadm -m session -P 3 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # head -n1 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # grep 'Attached scsi disk' 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # awk '{print $4}' 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@64 -- # dev=sda 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@65 -- # waitforfile /dev/sda 00:21:44.269 02:16:52 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@66 -- # make_filesystem ext4 /dev/sda 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:21:44.269 02:16:52 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:44.269 mke2fs 1.46.5 (30-Dec-2021) 00:21:44.786 Discarding device blocks: 0/131072 done 00:21:44.786 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:44.786 Filesystem UUID: 506e5311-b9c4-4d3d-b0c7-405fd6bb253b 00:21:44.786 Superblock backups stored on blocks: 00:21:44.786 32768, 98304 00:21:44.786 00:21:44.786 Allocating group tables: 0/4 done 00:21:44.786 Warning: could not erase sector 2: Input/output error 00:21:45.044 Warning: could not read block 0: Input/output error 00:21:45.044 Warning: could not erase sector 0: Input/output error 00:21:45.044 Writing inode tables: 0/4 done 00:21:45.044 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:45.044 02:16:53 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 0 -ge 15 ']' 00:21:45.044 02:16:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=1 00:21:45.044 02:16:53 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:45.044 [2024-07-23 02:16:53.772343] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:46.421 02:16:54 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:46.421 mke2fs 1.46.5 (30-Dec-2021) 00:21:46.421 Discarding device blocks: 0/131072 done 00:21:46.421 Warning: could not erase sector 2: Input/output error 00:21:46.421 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:46.421 Filesystem UUID: 5e29881c-97f9-4137-b823-efdecfc47f3b 00:21:46.421 Superblock backups stored on blocks: 00:21:46.421 32768, 98304 00:21:46.421 00:21:46.421 Allocating group tables: 0/4 done 00:21:46.680 Warning: could not read block 0: Input/output error 00:21:46.680 Warning: could not erase sector 0: Input/output error 00:21:46.680 Writing inode tables: 0/4 done 00:21:46.680 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:46.680 02:16:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 1 -ge 15 ']' 00:21:46.680 02:16:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=2 00:21:46.680 [2024-07-23 02:16:55.364019] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:46.680 02:16:55 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:47.615 02:16:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:47.615 mke2fs 1.46.5 (30-Dec-2021) 00:21:47.874 Discarding device blocks: 0/131072 done 00:21:48.131 Warning: could not erase sector 2: Input/output error 00:21:48.131 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:48.131 Filesystem UUID: 
ceea6bfe-521f-4aed-96dd-58c29beff4e3 00:21:48.131 Superblock backups stored on blocks: 00:21:48.131 32768, 98304 00:21:48.131 00:21:48.131 Allocating group tables: 0/4 done 00:21:48.131 Warning: could not read block 0: Input/output error 00:21:48.131 Warning: could not erase sector 0: Input/output error 00:21:48.131 Writing inode tables: 0/4 done 00:21:48.389 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:48.389 02:16:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 2 -ge 15 ']' 00:21:48.389 02:16:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=3 00:21:48.389 02:16:56 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:48.389 [2024-07-23 02:16:56.957759] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:49.324 02:16:57 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:49.324 mke2fs 1.46.5 (30-Dec-2021) 00:21:49.583 Discarding device blocks: 0/131072 done 00:21:49.842 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:49.842 Filesystem UUID: d641cf3d-8584-448d-8dcb-bc7641dd59e1 00:21:49.842 Superblock backups stored on blocks: 00:21:49.842 32768, 98304 00:21:49.842 00:21:49.842 Allocating group tables: 0/4Warning: could not erase sector 2: Input/output error 00:21:49.842 done 00:21:49.842 Warning: could not read block 0: Input/output error 00:21:49.842 Warning: could not erase sector 0: Input/output error 00:21:49.842 Writing inode tables: 0/4 done 00:21:50.100 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:50.100 02:16:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 3 -ge 15 ']' 00:21:50.100 02:16:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=4 00:21:50.100 02:16:58 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:50.100 [2024-07-23 
02:16:58.652144] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:51.035 02:16:59 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:51.035 mke2fs 1.46.5 (30-Dec-2021) 00:21:51.294 Discarding device blocks: 0/131072 done 00:21:51.294 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:51.294 Filesystem UUID: 09609be5-24eb-495e-9545-b1021d6d4590 00:21:51.294 Superblock backups stored on blocks: 00:21:51.294 32768, 98304 00:21:51.294 00:21:51.294 Allocating group tables: 0/4 done 00:21:51.294 Warning: could not erase sector 2: Input/output error 00:21:51.552 Warning: could not read block 0: Input/output error 00:21:51.552 Warning: could not erase sector 0: Input/output error 00:21:51.552 Writing inode tables: 0/4 done 00:21:51.552 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:51.552 02:17:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 4 -ge 15 ']' 00:21:51.552 02:17:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=5 00:21:51.552 02:17:00 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:51.552 [2024-07-23 02:17:00.244467] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:52.486 02:17:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:52.486 mke2fs 1.46.5 (30-Dec-2021) 00:21:52.745 Discarding device blocks: 0/131072 done 00:21:53.004 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:53.004 Filesystem UUID: cb85e221-2452-4d60-9074-6af142a7cc19 00:21:53.004 Superblock backups stored on blocks: 00:21:53.004 32768, 98304 00:21:53.004 00:21:53.004 Allocating group tables: 0/4 done 00:21:53.004 Warning: could not erase sector 2: Input/output error 00:21:53.004 Warning: could not read block 0: Input/output error 00:21:53.004 Warning: could not erase sector 0: Input/output 
error 00:21:53.004 Writing inode tables: 0/4 done 00:21:53.263 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:53.263 02:17:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 5 -ge 15 ']' 00:21:53.263 02:17:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=6 00:21:53.263 02:17:01 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:53.263 [2024-07-23 02:17:01.835540] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:54.198 02:17:02 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:54.198 mke2fs 1.46.5 (30-Dec-2021) 00:21:54.457 Discarding device blocks: 0/131072 done 00:21:54.457 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:54.457 Filesystem UUID: f5449b88-87f8-47d5-89b2-bad02bb246d4 00:21:54.457 Superblock backups stored on blocks: 00:21:54.457 32768, 98304 00:21:54.457 00:21:54.457 Allocating group tables: 0/4 done 00:21:54.457 Warning: could not erase sector 2: Input/output error 00:21:54.719 Warning: could not read block 0: Input/output error 00:21:54.719 Warning: could not erase sector 0: Input/output error 00:21:54.719 Writing inode tables: 0/4 done 00:21:54.979 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:54.979 02:17:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 6 -ge 15 ']' 00:21:54.979 02:17:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=7 00:21:54.979 02:17:03 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:55.914 02:17:04 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:55.914 mke2fs 1.46.5 (30-Dec-2021) 00:21:56.173 Discarding device blocks: 0/131072 done 00:21:56.173 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:56.173 Filesystem UUID: 
cd51a890-8ae7-478d-ac11-9656d6684bcb 00:21:56.173 Superblock backups stored on blocks: 00:21:56.173 32768, 98304 00:21:56.173 Warning: could not erase sector 2: Input/output error 00:21:56.173 00:21:56.173 Allocating group tables: 0/4 done 00:21:56.173 Warning: could not read block 0: Input/output error 00:21:56.432 Warning: could not erase sector 0: Input/output error 00:21:56.432 Writing inode tables: 0/4 done 00:21:56.432 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:56.432 02:17:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 7 -ge 15 ']' 00:21:56.432 02:17:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=8 00:21:56.432 02:17:05 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:21:56.432 [2024-07-23 02:17:05.113554] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:57.386 02:17:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:57.386 mke2fs 1.46.5 (30-Dec-2021) 00:21:57.658 Discarding device blocks: 0/131072 done 00:21:57.918 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:57.918 Filesystem UUID: 773c10f9-aaf1-441e-965e-e7a12ecfdf35 00:21:57.918 Superblock backups stored on blocks: 00:21:57.918 32768, Warning: could not erase sector 2: Input/output error 00:21:57.918 98304 00:21:57.918 00:21:57.918 Allocating group tables: 0/4 done 00:21:57.918 Warning: could not read block 0: Input/output error 00:21:57.918 Warning: could not erase sector 0: Input/output error 00:21:57.918 Writing inode tables: 0/4 done 00:21:58.176 ext2fs_write_inode_full: Input/output error while writing reserved inodes 00:21:58.176 02:17:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 8 -ge 15 ']' 00:21:58.176 02:17:06 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=9 00:21:58.176 02:17:06 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@940 -- # sleep 1 00:21:58.176 [2024-07-23 02:17:06.704607] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:59.112 02:17:07 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:21:59.112 mke2fs 1.46.5 (30-Dec-2021) 00:21:59.371 Discarding device blocks: 0/131072 done 00:21:59.371 Creating filesystem with 131072 4k blocks and 32768 inodes 00:21:59.371 Filesystem UUID: 08ec98f9-c5e2-4205-999f-9a44e1c4ea7e 00:21:59.371 Superblock backups stored on blocks: 00:21:59.371 32768, 98304 00:21:59.371 00:21:59.371 Allocating group tables: 0/4 done 00:21:59.371 Warning: could not erase sector 2: Input/output error 00:21:59.371 Warning: could not read block 0: Input/output error 00:21:59.630 Writing inode tables: 0/4 done 00:21:59.630 Creating journal (4096 blocks): done 00:21:59.630 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:21:59.630 02:17:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 9 -ge 15 ']' 00:21:59.630 [2024-07-23 02:17:08.301176] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:21:59.630 02:17:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=10 00:21:59.630 02:17:08 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:00.567 02:17:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:00.567 mke2fs 1.46.5 (30-Dec-2021) 00:22:00.826 Discarding device blocks: 0/131072 done 00:22:00.826 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:00.826 Filesystem UUID: f23e5dee-5b30-4478-9c00-1241060fe8da 00:22:00.826 Superblock backups stored on blocks: 00:22:00.826 32768, 98304 00:22:00.826 00:22:00.826 Allocating group tables: 0/4 done 00:22:00.826 Writing inode tables: 0/4 done 00:22:00.826 Creating journal (4096 
blocks): done 00:22:01.085 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:01.085 02:17:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 10 -ge 15 ']' 00:22:01.085 02:17:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=11 00:22:01.085 [2024-07-23 02:17:09.622836] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:01.085 02:17:09 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:02.021 02:17:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:02.021 mke2fs 1.46.5 (30-Dec-2021) 00:22:02.280 Discarding device blocks: 0/131072 done 00:22:02.280 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:02.280 Filesystem UUID: ee498bae-a80e-4047-af17-25eaa92d7ca6 00:22:02.280 Superblock backups stored on blocks: 00:22:02.280 32768, 98304 00:22:02.280 00:22:02.280 Allocating group tables: 0/4 done 00:22:02.280 Writing inode tables: 0/4 done 00:22:02.280 Creating journal (4096 blocks): done 00:22:02.280 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:02.280 [2024-07-23 02:17:10.948076] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:02.280 02:17:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 11 -ge 15 ']' 00:22:02.280 02:17:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=12 00:22:02.280 02:17:10 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:03.215 02:17:11 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:03.215 mke2fs 1.46.5 (30-Dec-2021) 00:22:03.475 Discarding device blocks: 0/131072 done 00:22:03.475 Creating filesystem with 131072 4k blocks and 32768 
inodes 00:22:03.475 Filesystem UUID: 0b86d520-cce6-48cb-94a5-e7ad670782e0 00:22:03.475 Superblock backups stored on blocks: 00:22:03.475 32768, 98304 00:22:03.475 00:22:03.475 Allocating group tables: 0/4 done 00:22:03.475 Writing inode tables: 0/4 done 00:22:03.475 Creating journal (4096 blocks): done 00:22:03.733 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:03.734 [2024-07-23 02:17:12.264132] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:03.734 02:17:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 12 -ge 15 ']' 00:22:03.734 02:17:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=13 00:22:03.734 02:17:12 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:04.671 02:17:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:04.671 mke2fs 1.46.5 (30-Dec-2021) 00:22:04.931 Discarding device blocks: 0/131072 done 00:22:04.931 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:04.931 Filesystem UUID: e32479e4-d171-4a5b-8449-c587c27d261f 00:22:04.931 Superblock backups stored on blocks: 00:22:04.931 32768, 98304 00:22:04.931 00:22:04.931 Allocating group tables: 0/4 done 00:22:04.931 Writing inode tables: 0/4 done 00:22:04.931 Creating journal (4096 blocks): done 00:22:04.931 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:04.931 [2024-07-23 02:17:13.590028] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:04.931 02:17:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 13 -ge 15 ']' 00:22:04.931 02:17:13 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=14 00:22:04.931 02:17:13 iscsi_tgt.iscsi_tgt_ext4test -- 
common/autotest_common.sh@940 -- # sleep 1 00:22:05.868 02:17:14 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:05.868 mke2fs 1.46.5 (30-Dec-2021) 00:22:06.127 Discarding device blocks: 0/131072 done 00:22:06.127 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:06.127 Filesystem UUID: 663f45ab-a437-4446-83fa-83ed98fb04f5 00:22:06.127 Superblock backups stored on blocks: 00:22:06.127 32768, 98304 00:22:06.127 00:22:06.127 Allocating group tables: 0/4 done 00:22:06.127 Writing inode tables: 0/4 done 00:22:06.127 Creating journal (4096 blocks): done 00:22:06.385 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:06.385 02:17:14 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 14 -ge 15 ']' 00:22:06.385 02:17:14 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@939 -- # i=15 00:22:06.385 02:17:14 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@940 -- # sleep 1 00:22:06.385 [2024-07-23 02:17:14.919685] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:07.322 02:17:15 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:07.322 mke2fs 1.46.5 (30-Dec-2021) 00:22:07.581 Discarding device blocks: 0/131072 done 00:22:07.581 Creating filesystem with 131072 4k blocks and 32768 inodes 00:22:07.581 Filesystem UUID: 31b2f539-945a-4e9a-9daa-5039c4a3c526 00:22:07.581 Superblock backups stored on blocks: 00:22:07.581 32768, 98304 00:22:07.581 00:22:07.581 Allocating group tables: 0/4 done 00:22:07.581 Writing inode tables: 0/4 done 00:22:07.581 Creating journal (4096 blocks): done 00:22:07.581 Writing superblocks and filesystem accounting information: 0/4 mkfs.ext4: Input/output error while writing out and closing file system 00:22:07.581 mkfs failed as expected 00:22:07.581 Cleaning up iSCSI connection 00:22:07.581 
02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@936 -- # '[' 15 -ge 15 ']' 00:22:07.581 [2024-07-23 02:17:16.247290] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@937 -- # return 1 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@70 -- # echo 'mkfs failed as expected' 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@73 -- # iscsicleanup 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:22:07.581 Logging out of session [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] 00:22:07.581 Logout of [sid: 71, target: iqn.2013-06.com.intel.ch.spdk:Target0, portal: 10.0.0.1,3260] successful. 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:22:07.581 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_inject_error EE_Malloc0 clear failure 00:22:07.839 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_delete_target_node iqn.2013-06.com.intel.ch.spdk:Target0 00:22:08.406 Error injection test done 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@76 -- # echo 'Error injection test done' 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # get_bdev_size Nvme0n1 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1378 -- # local bdev_name=Nvme0n1 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test 
-- common/autotest_common.sh@1379 -- # local bdev_info 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1380 -- # local bs 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1381 -- # local nb 00:22:08.406 02:17:16 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 00:22:08.406 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:08.406 { 00:22:08.406 "name": "Nvme0n1", 00:22:08.406 "aliases": [ 00:22:08.406 "d8fb91b7-3d6d-4b99-aad8-b1fbe0a4c013" 00:22:08.406 ], 00:22:08.406 "product_name": "NVMe disk", 00:22:08.406 "block_size": 4096, 00:22:08.406 "num_blocks": 1310720, 00:22:08.406 "uuid": "d8fb91b7-3d6d-4b99-aad8-b1fbe0a4c013", 00:22:08.406 "assigned_rate_limits": { 00:22:08.406 "rw_ios_per_sec": 0, 00:22:08.406 "rw_mbytes_per_sec": 0, 00:22:08.406 "r_mbytes_per_sec": 0, 00:22:08.406 "w_mbytes_per_sec": 0 00:22:08.406 }, 00:22:08.406 "claimed": false, 00:22:08.406 "zoned": false, 00:22:08.406 "supported_io_types": { 00:22:08.406 "read": true, 00:22:08.406 "write": true, 00:22:08.406 "unmap": true, 00:22:08.406 "flush": true, 00:22:08.406 "reset": true, 00:22:08.406 "nvme_admin": true, 00:22:08.406 "nvme_io": true, 00:22:08.406 "nvme_io_md": false, 00:22:08.406 "write_zeroes": true, 00:22:08.406 "zcopy": false, 00:22:08.406 "get_zone_info": false, 00:22:08.406 "zone_management": false, 00:22:08.406 "zone_append": false, 00:22:08.406 "compare": true, 00:22:08.406 "compare_and_write": false, 00:22:08.406 "abort": true, 00:22:08.406 "seek_hole": false, 00:22:08.406 "seek_data": false, 00:22:08.406 "copy": true, 00:22:08.406 "nvme_iov_md": false 00:22:08.406 }, 00:22:08.406 "driver_specific": { 00:22:08.406 "nvme": [ 00:22:08.406 { 00:22:08.406 "pci_address": "0000:00:10.0", 00:22:08.406 "trid": { 00:22:08.406 "trtype": "PCIe", 00:22:08.406 "traddr": "0000:00:10.0" 00:22:08.406 }, 
00:22:08.406 "ctrlr_data": { 00:22:08.406 "cntlid": 0, 00:22:08.406 "vendor_id": "0x1b36", 00:22:08.406 "model_number": "QEMU NVMe Ctrl", 00:22:08.406 "serial_number": "12340", 00:22:08.406 "firmware_revision": "8.0.0", 00:22:08.406 "subnqn": "nqn.2019-08.org.qemu:12340", 00:22:08.406 "oacs": { 00:22:08.406 "security": 0, 00:22:08.406 "format": 1, 00:22:08.406 "firmware": 0, 00:22:08.406 "ns_manage": 1 00:22:08.406 }, 00:22:08.406 "multi_ctrlr": false, 00:22:08.406 "ana_reporting": false 00:22:08.406 }, 00:22:08.406 "vs": { 00:22:08.406 "nvme_version": "1.4" 00:22:08.406 }, 00:22:08.406 "ns_data": { 00:22:08.406 "id": 1, 00:22:08.406 "can_share": false 00:22:08.406 } 00:22:08.406 } 00:22:08.406 ], 00:22:08.406 "mp_policy": "active_passive" 00:22:08.406 } 00:22:08.406 } 00:22:08.406 ]' 00:22:08.406 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1383 -- # bs=4096 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1388 -- # echo 5120 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@78 -- # bdev_size=5120 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@79 -- # split_size=2560 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@80 -- # split_size=2560 00:22:08.664 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create Nvme0n1 2 -s 2560 00:22:08.923 Nvme0n1p0 Nvme0n1p1 00:22:08.923 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@82 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py iscsi_create_target_node Target1 Target1_alias Nvme0n1p0:0 1:2 64 -d 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@84 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:22:09.182 10.0.0.1:3260,1 iqn.2013-06.com.intel.ch.spdk:Target1 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@85 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:22:09.182 Logging in to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:22:09.182 Login to [iface: default, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@86 -- # waitforiscsidevices 1 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@116 -- # local num=1 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:22:09.182 [2024-07-23 02:17:17.815817] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@119 -- # n=1 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@123 -- # return 0 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # iscsiadm -m session -P 3 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # grep 'Attached scsi disk' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # 
head -n1 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # awk '{print $4}' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@88 -- # dev=sda 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@89 -- # waitforfile /dev/sda 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1265 -- # local i=0 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1266 -- # '[' '!' -e /dev/sda ']' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1272 -- # '[' '!' -e /dev/sda ']' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1276 -- # return 0 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@91 -- # make_filesystem ext4 /dev/sda 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@924 -- # local fstype=ext4 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@925 -- # local dev_name=/dev/sda 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@926 -- # local i=0 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@927 -- # local force 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@930 -- # force=-F 00:22:09.182 02:17:17 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/sda 00:22:09.182 mke2fs 1.46.5 (30-Dec-2021) 00:22:09.182 Discarding device blocks: 0/655360 done 00:22:09.182 Creating filesystem with 655360 4k blocks and 163840 inodes 00:22:09.182 Filesystem UUID: 31058508-1190-488a-bd17-9232df79ea5c 00:22:09.182 Superblock backups stored on blocks: 00:22:09.182 32768, 98304, 163840, 229376, 294912 00:22:09.182 00:22:09.182 Allocating group tables: 0/20 done 00:22:09.182 Writing 
inode tables: 0/20 done 00:22:09.749 Creating journal (16384 blocks): done 00:22:09.749 Writing superblocks and filesystem accounting information: 0/20 done 00:22:09.749 00:22:09.749 [2024-07-23 02:17:18.284666] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:22:09.749 02:17:18 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@943 -- # return 0 00:22:09.749 02:17:18 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@92 -- # mkdir -p /mnt/sdadir 00:22:09.749 02:17:18 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@93 -- # mount -o sync /dev/sda /mnt/sdadir 00:22:09.749 02:17:18 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@95 -- # rsync -qav --exclude=.git '--exclude=*.o' /home/vagrant/spdk_repo/spdk/ /mnt/sdadir/spdk 00:24:01.225 02:19:03 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@97 -- # make -C /mnt/sdadir/spdk clean 00:24:01.225 make: Entering directory '/mnt/sdadir/spdk' 00:24:47.899 make[1]: Nothing to be done for 'clean'. 00:24:47.899 make: Leaving directory '/mnt/sdadir/spdk' 00:24:47.899 02:19:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # cd /mnt/sdadir/spdk 00:24:47.899 02:19:55 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@98 -- # ./configure --disable-unit-tests --disable-tests 00:24:47.899 Using default SPDK env in /mnt/sdadir/spdk/lib/env_dpdk 00:24:47.899 Using default DPDK in /mnt/sdadir/spdk/dpdk/build 00:25:09.825 Configuring ISA-L (logfile: /mnt/sdadir/spdk/.spdk-isal.log)...done. 00:25:36.363 Configuring ISA-L-crypto (logfile: /mnt/sdadir/spdk/.spdk-isal-crypto.log)...done. 00:25:36.363 Creating mk/config.mk...done. 00:25:36.363 Creating mk/cc.flags.mk...done. 00:25:36.363 Type 'make' to build. 00:25:36.363 02:20:43 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@99 -- # make -C /mnt/sdadir/spdk -j 00:25:36.363 make: Entering directory '/mnt/sdadir/spdk' 00:25:36.363 make[1]: Nothing to be done for 'all'. 
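The size arithmetic traced above (`bs=4096`, `nb=1310720`, `bdev_size=5120`, `split_size=2560`) can be reproduced standalone. The sketch below parses a minimal copy of the `bdev_get_bdevs` JSON shown earlier in the log and derives the same numbers; the MiB conversion is an assumption inferred from the logged values, not quoted from `autotest_common.sh` itself.

```python
import json

# Minimal reproduction of the bdev_get_bdevs output captured above;
# only the fields the size calculation touches are kept.
bdev_info = json.loads("""
[
  {
    "name": "Nvme0n1",
    "block_size": 4096,
    "num_blocks": 1310720
  }
]
""")

bs = bdev_info[0]["block_size"]       # jq '.[] .block_size'  -> bs=4096
nb = bdev_info[0]["num_blocks"]       # jq '.[] .num_blocks'  -> nb=1310720

# Assumed conversion: bytes -> MiB, matching the logged bdev_size=5120.
bdev_size = bs * nb // (1024 * 1024)
split_size = bdev_size // 2           # ext4test.sh splits the bdev in two

print(bdev_size, split_size)          # 5120 2560
```

These are the values then fed to `rpc.py bdev_split_create Nvme0n1 2 -s 2560` in the trace.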
00:26:08.473 The Meson build system 00:26:08.473 Version: 1.3.1 00:26:08.473 Source dir: /mnt/sdadir/spdk/dpdk 00:26:08.473 Build dir: /mnt/sdadir/spdk/dpdk/build-tmp 00:26:08.473 Build type: native build 00:26:08.473 Program cat found: YES (/usr/bin/cat) 00:26:08.473 Project name: DPDK 00:26:08.473 Project version: 24.03.0 00:26:08.473 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:26:08.473 C linker for the host machine: cc ld.bfd 2.39-16 00:26:08.473 Host machine cpu family: x86_64 00:26:08.473 Host machine cpu: x86_64 00:26:08.473 Program pkg-config found: YES (/usr/bin/pkg-config) 00:26:08.473 Program check-symbols.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/check-symbols.sh) 00:26:08.473 Program options-ibverbs-static.sh found: YES (/mnt/sdadir/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:26:08.473 Program python3 found: YES (/usr/bin/python3) 00:26:08.473 Program cat found: YES (/usr/bin/cat) 00:26:08.473 Compiler for C supports arguments -march=native: YES 00:26:08.473 Checking for size of "void *" : 8 00:26:08.473 Checking for size of "void *" : 8 (cached) 00:26:08.473 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:26:08.473 Library m found: YES 00:26:08.473 Library numa found: YES 00:26:08.473 Has header "numaif.h" : YES 00:26:08.473 Library fdt found: NO 00:26:08.473 Library execinfo found: NO 00:26:08.473 Has header "execinfo.h" : YES 00:26:08.473 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:26:08.473 Run-time dependency libarchive found: NO (tried pkgconfig) 00:26:08.473 Run-time dependency libbsd found: NO (tried pkgconfig) 00:26:08.473 Run-time dependency jansson found: NO (tried pkgconfig) 00:26:08.473 Run-time dependency openssl found: YES 3.0.9 00:26:08.473 Run-time dependency libpcap found: YES 1.10.4 00:26:08.473 Has header "pcap.h" with dependency libpcap: YES 00:26:08.473 Compiler for C supports arguments -Wcast-qual: YES 00:26:08.473 Compiler for C 
supports arguments -Wdeprecated: YES 00:26:08.473 Compiler for C supports arguments -Wformat: YES 00:26:08.473 Compiler for C supports arguments -Wformat-nonliteral: YES 00:26:08.473 Compiler for C supports arguments -Wformat-security: YES 00:26:08.473 Compiler for C supports arguments -Wmissing-declarations: YES 00:26:08.473 Compiler for C supports arguments -Wmissing-prototypes: YES 00:26:08.473 Compiler for C supports arguments -Wnested-externs: YES 00:26:08.473 Compiler for C supports arguments -Wold-style-definition: YES 00:26:08.473 Compiler for C supports arguments -Wpointer-arith: YES 00:26:08.473 Compiler for C supports arguments -Wsign-compare: YES 00:26:08.473 Compiler for C supports arguments -Wstrict-prototypes: YES 00:26:08.473 Compiler for C supports arguments -Wundef: YES 00:26:08.473 Compiler for C supports arguments -Wwrite-strings: YES 00:26:08.473 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:26:08.473 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:26:08.473 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:26:08.473 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:26:08.473 Program objdump found: YES (/usr/bin/objdump) 00:26:08.473 Compiler for C supports arguments -mavx512f: YES 00:26:08.473 Checking if "AVX512 checking" compiles: YES 00:26:08.473 Fetching value of define "__SSE4_2__" : 1 00:26:08.473 Fetching value of define "__AES__" : 1 00:26:08.473 Fetching value of define "__AVX__" : 1 00:26:08.473 Fetching value of define "__AVX2__" : 1 00:26:08.473 Fetching value of define "__AVX512BW__" : (undefined) 00:26:08.473 Fetching value of define "__AVX512CD__" : (undefined) 00:26:08.473 Fetching value of define "__AVX512DQ__" : (undefined) 00:26:08.473 Fetching value of define "__AVX512F__" : (undefined) 00:26:08.473 Fetching value of define "__AVX512VL__" : (undefined) 00:26:08.473 Fetching value of define "__PCLMUL__" : 1 00:26:08.473 Fetching value of 
define "__RDRND__" : 1 00:26:08.473 Fetching value of define "__RDSEED__" : 1 00:26:08.473 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:26:08.473 Fetching value of define "__znver1__" : (undefined) 00:26:08.473 Fetching value of define "__znver2__" : (undefined) 00:26:08.473 Fetching value of define "__znver3__" : (undefined) 00:26:08.473 Fetching value of define "__znver4__" : (undefined) 00:26:08.473 Compiler for C supports arguments -Wno-format-truncation: YES 00:26:08.473 Checking for function "getentropy" : NO 00:26:08.473 Fetching value of define "__PCLMUL__" : 1 (cached) 00:26:08.473 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:26:08.473 Compiler for C supports arguments -mpclmul: YES 00:26:08.473 Compiler for C supports arguments -maes: YES 00:26:08.473 Compiler for C supports arguments -mavx512f: YES (cached) 00:26:08.473 Compiler for C supports arguments -mavx512bw: YES 00:26:08.473 Compiler for C supports arguments -mavx512dq: YES 00:26:08.473 Compiler for C supports arguments -mavx512vl: YES 00:26:08.473 Compiler for C supports arguments -mvpclmulqdq: YES 00:26:08.473 Compiler for C supports arguments -mavx2: YES 00:26:08.473 Compiler for C supports arguments -mavx: YES 00:26:08.473 Compiler for C supports arguments -Wno-cast-qual: YES 00:26:08.473 Has header "linux/userfaultfd.h" : YES 00:26:08.473 Has header "linux/vduse.h" : YES 00:26:08.473 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:26:08.473 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:26:08.473 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:26:08.473 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:26:08.473 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:26:08.473 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:26:08.473 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:26:08.473 Program doxygen found: YES (/usr/bin/doxygen) 00:26:08.473 Configuring doxy-api-html.conf using configuration 00:26:08.473 Configuring doxy-api-man.conf using configuration 00:26:08.473 Program mandb found: YES (/usr/bin/mandb) 00:26:08.473 Program sphinx-build found: NO 00:26:08.473 Configuring rte_build_config.h using configuration 00:26:08.473 Message: 00:26:08.473 ================= 00:26:08.473 Applications Enabled 00:26:08.473 ================= 00:26:08.473 00:26:08.473 apps: 00:26:08.473 00:26:08.473 00:26:08.473 Message: 00:26:08.473 ================= 00:26:08.473 Libraries Enabled 00:26:08.473 ================= 00:26:08.473 00:26:08.473 libs: 00:26:08.473 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:26:08.473 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:26:08.473 cryptodev, dmadev, power, reorder, security, vhost, 00:26:08.473 00:26:08.473 Message: 00:26:08.473 =============== 00:26:08.473 Drivers Enabled 00:26:08.473 =============== 00:26:08.473 00:26:08.473 common: 00:26:08.473 00:26:08.473 bus: 00:26:08.473 pci, vdev, 00:26:08.473 mempool: 00:26:08.473 ring, 00:26:08.473 dma: 00:26:08.473 00:26:08.473 net: 00:26:08.473 00:26:08.473 crypto: 00:26:08.473 00:26:08.473 compress: 00:26:08.473 00:26:08.473 vdpa: 00:26:08.473 00:26:08.473 00:26:08.474 Message: 00:26:08.474 ================= 00:26:08.474 Content Skipped 00:26:08.474 ================= 00:26:08.474 00:26:08.474 apps: 00:26:08.474 dumpcap: explicitly disabled via build config 00:26:08.474 graph: explicitly disabled via build config 00:26:08.474 pdump: explicitly disabled via build config 00:26:08.474 proc-info: explicitly disabled via build config 00:26:08.474 test-acl: explicitly disabled via build config 00:26:08.474 test-bbdev: explicitly disabled via build config 00:26:08.474 test-cmdline: explicitly disabled via build config 00:26:08.474 test-compress-perf: explicitly disabled via build config 00:26:08.474 test-crypto-perf: explicitly disabled via 
build config 00:26:08.474 test-dma-perf: explicitly disabled via build config 00:26:08.474 test-eventdev: explicitly disabled via build config 00:26:08.474 test-fib: explicitly disabled via build config 00:26:08.474 test-flow-perf: explicitly disabled via build config 00:26:08.474 test-gpudev: explicitly disabled via build config 00:26:08.474 test-mldev: explicitly disabled via build config 00:26:08.474 test-pipeline: explicitly disabled via build config 00:26:08.474 test-pmd: explicitly disabled via build config 00:26:08.474 test-regex: explicitly disabled via build config 00:26:08.474 test-sad: explicitly disabled via build config 00:26:08.474 test-security-perf: explicitly disabled via build config 00:26:08.474 00:26:08.474 libs: 00:26:08.474 argparse: explicitly disabled via build config 00:26:08.474 metrics: explicitly disabled via build config 00:26:08.474 acl: explicitly disabled via build config 00:26:08.474 bbdev: explicitly disabled via build config 00:26:08.474 bitratestats: explicitly disabled via build config 00:26:08.474 bpf: explicitly disabled via build config 00:26:08.474 cfgfile: explicitly disabled via build config 00:26:08.474 distributor: explicitly disabled via build config 00:26:08.474 efd: explicitly disabled via build config 00:26:08.474 eventdev: explicitly disabled via build config 00:26:08.474 dispatcher: explicitly disabled via build config 00:26:08.474 gpudev: explicitly disabled via build config 00:26:08.474 gro: explicitly disabled via build config 00:26:08.474 gso: explicitly disabled via build config 00:26:08.474 ip_frag: explicitly disabled via build config 00:26:08.474 jobstats: explicitly disabled via build config 00:26:08.474 latencystats: explicitly disabled via build config 00:26:08.474 lpm: explicitly disabled via build config 00:26:08.474 member: explicitly disabled via build config 00:26:08.474 pcapng: explicitly disabled via build config 00:26:08.474 rawdev: explicitly disabled via build config 00:26:08.474 regexdev: 
explicitly disabled via build config 00:26:08.474 mldev: explicitly disabled via build config 00:26:08.474 rib: explicitly disabled via build config 00:26:08.474 sched: explicitly disabled via build config 00:26:08.474 stack: explicitly disabled via build config 00:26:08.474 ipsec: explicitly disabled via build config 00:26:08.474 pdcp: explicitly disabled via build config 00:26:08.474 fib: explicitly disabled via build config 00:26:08.474 port: explicitly disabled via build config 00:26:08.474 pdump: explicitly disabled via build config 00:26:08.474 table: explicitly disabled via build config 00:26:08.474 pipeline: explicitly disabled via build config 00:26:08.474 graph: explicitly disabled via build config 00:26:08.474 node: explicitly disabled via build config 00:26:08.474 00:26:08.474 drivers: 00:26:08.474 common/cpt: not in enabled drivers build config 00:26:08.474 common/dpaax: not in enabled drivers build config 00:26:08.474 common/iavf: not in enabled drivers build config 00:26:08.474 common/idpf: not in enabled drivers build config 00:26:08.474 common/ionic: not in enabled drivers build config 00:26:08.474 common/mvep: not in enabled drivers build config 00:26:08.474 common/octeontx: not in enabled drivers build config 00:26:08.474 bus/auxiliary: not in enabled drivers build config 00:26:08.474 bus/cdx: not in enabled drivers build config 00:26:08.474 bus/dpaa: not in enabled drivers build config 00:26:08.474 bus/fslmc: not in enabled drivers build config 00:26:08.474 bus/ifpga: not in enabled drivers build config 00:26:08.474 bus/platform: not in enabled drivers build config 00:26:08.474 bus/uacce: not in enabled drivers build config 00:26:08.474 bus/vmbus: not in enabled drivers build config 00:26:08.474 common/cnxk: not in enabled drivers build config 00:26:08.474 common/mlx5: not in enabled drivers build config 00:26:08.474 common/nfp: not in enabled drivers build config 00:26:08.474 common/nitrox: not in enabled drivers build config 00:26:08.474 
common/qat: not in enabled drivers build config 00:26:08.474 common/sfc_efx: not in enabled drivers build config 00:26:08.474 mempool/bucket: not in enabled drivers build config 00:26:08.474 mempool/cnxk: not in enabled drivers build config 00:26:08.474 mempool/dpaa: not in enabled drivers build config 00:26:08.474 mempool/dpaa2: not in enabled drivers build config 00:26:08.474 mempool/octeontx: not in enabled drivers build config 00:26:08.474 mempool/stack: not in enabled drivers build config 00:26:08.474 dma/cnxk: not in enabled drivers build config 00:26:08.474 dma/dpaa: not in enabled drivers build config 00:26:08.474 dma/dpaa2: not in enabled drivers build config 00:26:08.474 dma/hisilicon: not in enabled drivers build config 00:26:08.474 dma/idxd: not in enabled drivers build config 00:26:08.474 dma/ioat: not in enabled drivers build config 00:26:08.474 dma/skeleton: not in enabled drivers build config 00:26:08.474 net/af_packet: not in enabled drivers build config 00:26:08.474 net/af_xdp: not in enabled drivers build config 00:26:08.474 net/ark: not in enabled drivers build config 00:26:08.474 net/atlantic: not in enabled drivers build config 00:26:08.474 net/avp: not in enabled drivers build config 00:26:08.474 net/axgbe: not in enabled drivers build config 00:26:08.474 net/bnx2x: not in enabled drivers build config 00:26:08.474 net/bnxt: not in enabled drivers build config 00:26:08.474 net/bonding: not in enabled drivers build config 00:26:08.474 net/cnxk: not in enabled drivers build config 00:26:08.474 net/cpfl: not in enabled drivers build config 00:26:08.474 net/cxgbe: not in enabled drivers build config 00:26:08.474 net/dpaa: not in enabled drivers build config 00:26:08.474 net/dpaa2: not in enabled drivers build config 00:26:08.474 net/e1000: not in enabled drivers build config 00:26:08.474 net/ena: not in enabled drivers build config 00:26:08.474 net/enetc: not in enabled drivers build config 00:26:08.474 net/enetfec: not in enabled drivers build 
config 00:26:08.474 net/enic: not in enabled drivers build config 00:26:08.474 net/failsafe: not in enabled drivers build config 00:26:08.474 net/fm10k: not in enabled drivers build config 00:26:08.474 net/gve: not in enabled drivers build config 00:26:08.474 net/hinic: not in enabled drivers build config 00:26:08.474 net/hns3: not in enabled drivers build config 00:26:08.474 net/i40e: not in enabled drivers build config 00:26:08.474 net/iavf: not in enabled drivers build config 00:26:08.474 net/ice: not in enabled drivers build config 00:26:08.474 net/idpf: not in enabled drivers build config 00:26:08.474 net/igc: not in enabled drivers build config 00:26:08.474 net/ionic: not in enabled drivers build config 00:26:08.474 net/ipn3ke: not in enabled drivers build config 00:26:08.474 net/ixgbe: not in enabled drivers build config 00:26:08.474 net/mana: not in enabled drivers build config 00:26:08.474 net/memif: not in enabled drivers build config 00:26:08.474 net/mlx4: not in enabled drivers build config 00:26:08.474 net/mlx5: not in enabled drivers build config 00:26:08.474 net/mvneta: not in enabled drivers build config 00:26:08.474 net/mvpp2: not in enabled drivers build config 00:26:08.474 net/netvsc: not in enabled drivers build config 00:26:08.474 net/nfb: not in enabled drivers build config 00:26:08.474 net/nfp: not in enabled drivers build config 00:26:08.474 net/ngbe: not in enabled drivers build config 00:26:08.474 net/null: not in enabled drivers build config 00:26:08.474 net/octeontx: not in enabled drivers build config 00:26:08.474 net/octeon_ep: not in enabled drivers build config 00:26:08.474 net/pcap: not in enabled drivers build config 00:26:08.474 net/pfe: not in enabled drivers build config 00:26:08.474 net/qede: not in enabled drivers build config 00:26:08.474 net/ring: not in enabled drivers build config 00:26:08.474 net/sfc: not in enabled drivers build config 00:26:08.474 net/softnic: not in enabled drivers build config 00:26:08.474 net/tap: 
not in enabled drivers build config 00:26:08.474 net/thunderx: not in enabled drivers build config 00:26:08.474 net/txgbe: not in enabled drivers build config 00:26:08.474 net/vdev_netvsc: not in enabled drivers build config 00:26:08.474 net/vhost: not in enabled drivers build config 00:26:08.474 net/virtio: not in enabled drivers build config 00:26:08.474 net/vmxnet3: not in enabled drivers build config 00:26:08.474 raw/*: missing internal dependency, "rawdev" 00:26:08.474 crypto/armv8: not in enabled drivers build config 00:26:08.474 crypto/bcmfs: not in enabled drivers build config 00:26:08.474 crypto/caam_jr: not in enabled drivers build config 00:26:08.474 crypto/ccp: not in enabled drivers build config 00:26:08.474 crypto/cnxk: not in enabled drivers build config 00:26:08.474 crypto/dpaa_sec: not in enabled drivers build config 00:26:08.474 crypto/dpaa2_sec: not in enabled drivers build config 00:26:08.474 crypto/ipsec_mb: not in enabled drivers build config 00:26:08.474 crypto/mlx5: not in enabled drivers build config 00:26:08.474 crypto/mvsam: not in enabled drivers build config 00:26:08.474 crypto/nitrox: not in enabled drivers build config 00:26:08.474 crypto/null: not in enabled drivers build config 00:26:08.474 crypto/octeontx: not in enabled drivers build config 00:26:08.474 crypto/openssl: not in enabled drivers build config 00:26:08.474 crypto/scheduler: not in enabled drivers build config 00:26:08.474 crypto/uadk: not in enabled drivers build config 00:26:08.474 crypto/virtio: not in enabled drivers build config 00:26:08.474 compress/isal: not in enabled drivers build config 00:26:08.474 compress/mlx5: not in enabled drivers build config 00:26:08.474 compress/nitrox: not in enabled drivers build config 00:26:08.475 compress/octeontx: not in enabled drivers build config 00:26:08.475 compress/zlib: not in enabled drivers build config 00:26:08.475 regex/*: missing internal dependency, "regexdev" 00:26:08.475 ml/*: missing internal dependency, "mldev" 
00:26:08.475 vdpa/ifc: not in enabled drivers build config 00:26:08.475 vdpa/mlx5: not in enabled drivers build config 00:26:08.475 vdpa/nfp: not in enabled drivers build config 00:26:08.475 vdpa/sfc: not in enabled drivers build config 00:26:08.475 event/*: missing internal dependency, "eventdev" 00:26:08.475 baseband/*: missing internal dependency, "bbdev" 00:26:08.475 gpu/*: missing internal dependency, "gpudev" 00:26:08.475 00:26:08.475 00:26:08.475 Build targets in project: 61 00:26:08.475 00:26:08.475 DPDK 24.03.0 00:26:08.475 00:26:08.475 User defined options 00:26:08.475 default_library : static 00:26:08.475 libdir : lib 00:26:08.475 prefix : /mnt/sdadir/spdk/dpdk/build 00:26:08.475 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Wno-error 00:26:08.475 c_link_args : 00:26:08.475 cpu_instruction_set: native 00:26:08.475 disable_apps : test-gpudev,proc-info,test-pipeline,test-mldev,pdump,dumpcap,test-eventdev,test-dma-perf,test-security-perf,test-bbdev,test,test-sad,test-pmd,test-acl,test-compress-perf,graph,test-flow-perf,test-fib,test-crypto-perf,test-regex,test-cmdline 00:26:08.475 disable_libs : latencystats,eventdev,pdump,jobstats,cfgfile,member,regexdev,gso,acl,rib,rawdev,distributor,argparse,lpm,port,bbdev,node,stack,ip_frag,metrics,pdcp,ipsec,gro,fib,gpudev,bitratestats,efd,table,bpf,pcapng,graph,pipeline,mldev,dispatcher,sched 00:26:08.475 enable_docs : false 00:26:08.475 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:26:08.475 enable_kmods : false 00:26:08.475 max_lcores : 128 00:26:08.475 tests : false 00:26:08.475 00:26:08.475 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:26:08.475 ninja: Entering directory `/mnt/sdadir/spdk/dpdk/build-tmp' 00:26:08.475 [1/244] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:26:08.475 [2/244] Compiling C object lib/librte_log.a.p/log_log.c.o 00:26:08.475 [3/244] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 
00:26:08.475 [4/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:26:08.475 [5/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:26:08.475 [6/244] Linking static target lib/librte_log.a 00:26:08.475 [7/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:26:08.475 [8/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:26:08.475 [9/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:26:08.475 [10/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:26:08.475 [11/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:26:08.475 [12/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:26:08.475 [13/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:26:08.475 [14/244] Linking target lib/librte_log.so.24.1 00:26:08.475 [15/244] Linking static target lib/librte_kvargs.a 00:26:08.475 [16/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:26:08.475 [17/244] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:26:08.475 [18/244] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:26:08.475 [19/244] Linking static target lib/librte_telemetry.a 00:26:08.475 [20/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:26:08.475 [21/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:26:08.475 [22/244] Linking target lib/librte_kvargs.so.24.1 00:26:08.475 [23/244] Linking target lib/librte_telemetry.so.24.1 00:26:08.475 [24/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:26:08.475 [25/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:26:08.475 [26/244] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:26:08.475 
[27/244] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:26:08.734 [28/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:26:08.734 [29/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:26:08.734 [30/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:26:08.734 [31/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:26:08.996 [32/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:26:08.996 [33/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:26:08.996 [34/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:26:08.996 [35/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:26:08.996 [36/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:26:08.996 [37/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:26:09.256 [38/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:26:09.256 [39/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:26:09.256 [40/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:26:09.256 [41/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:26:09.256 [42/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:26:09.514 [43/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:26:09.514 [44/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:26:09.773 [45/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:26:09.773 [46/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:26:09.773 [47/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:26:09.773 [48/244] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:26:10.032 [49/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:26:10.032 [50/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:26:10.032 [51/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:26:10.032 [52/244] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:26:10.291 [53/244] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:26:10.291 [54/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:26:10.291 [55/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:26:10.291 [56/244] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:26:10.291 [57/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:26:10.550 [58/244] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:26:10.550 [59/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:26:10.550 [60/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:26:10.550 [61/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:26:10.550 [62/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:26:10.809 [63/244] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:26:10.809 [64/244] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:26:10.809 [65/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:26:11.068 [66/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:26:11.068 [67/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:26:11.068 [68/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:26:11.068 [69/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:26:11.068 [70/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:26:11.326 [71/244] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:26:11.326 [72/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:26:11.326 [73/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:26:11.326 [74/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:26:11.585 [75/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:26:11.585 [76/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:26:11.585 [77/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:26:11.844 [78/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:26:11.844 [79/244] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:26:11.844 [80/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:26:12.103 [81/244] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:26:12.103 [82/244] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:26:12.103 [83/244] Linking static target lib/librte_ring.a 00:26:12.103 [84/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:26:12.362 [85/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:26:12.362 [86/244] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:26:12.362 [87/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:26:12.362 [88/244] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:26:12.621 [89/244] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:26:12.621 [90/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:26:12.621 [91/244] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:26:12.621 [92/244] Linking static target lib/net/libnet_crc_avx512_lib.a 00:26:12.621 [93/244] Linking static target lib/librte_mempool.a 00:26:12.880 [94/244] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:26:12.880 [95/244] Linking static target lib/librte_rcu.a 00:26:13.139 [96/244] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:26:13.139 [97/244] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:26:13.139 [98/244] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:26:13.139 [99/244] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:26:13.139 [100/244] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:26:13.139 [101/244] Linking static target lib/librte_mbuf.a 00:26:13.398 [102/244] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:26:13.398 [103/244] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:26:13.398 [104/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:26:13.657 [105/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:26:13.657 [106/244] Linking static target lib/librte_net.a 00:26:13.657 [107/244] Linking static target lib/librte_meter.a 00:26:13.657 [108/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:26:13.657 [109/244] Linking static target lib/librte_eal.a 00:26:13.916 [110/244] Linking target lib/librte_eal.so.24.1 00:26:13.916 [111/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:26:14.175 [112/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:26:14.175 [113/244] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:26:14.175 [114/244] Linking target lib/librte_ring.so.24.1 00:26:14.434 [115/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:26:14.434 [116/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:26:14.434 [117/244] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:26:14.692 [118/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:26:14.692 [119/244] Linking target 
lib/librte_meter.so.24.1 00:26:14.692 [120/244] Linking target lib/librte_rcu.so.24.1 00:26:14.692 [121/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:26:14.951 [122/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:26:14.951 [123/244] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:26:14.951 [124/244] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:26:14.951 [125/244] Linking target lib/librte_mempool.so.24.1 00:26:14.951 [126/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:26:14.951 [127/244] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:26:14.951 [128/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:26:14.951 [129/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:26:14.951 [130/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:26:15.210 [131/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:26:15.210 [132/244] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:26:15.210 [133/244] Linking static target lib/librte_pci.a 00:26:15.210 [134/244] Linking target lib/librte_pci.so.24.1 00:26:15.210 [135/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:26:15.210 [136/244] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:26:15.210 [137/244] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:26:15.210 [138/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:26:15.210 [139/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:26:15.210 [140/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:26:15.210 [141/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:26:15.468 [142/244] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:26:15.469 [143/244] Linking target lib/librte_mbuf.so.24.1 00:26:15.469 [144/244] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:26:15.469 [145/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:26:15.469 [146/244] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:26:15.469 [147/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:26:15.469 [148/244] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:26:15.469 [149/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:26:15.727 [150/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:26:15.727 [151/244] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:26:15.727 [152/244] Linking target lib/librte_net.so.24.1 00:26:15.727 [153/244] Linking static target lib/librte_cmdline.a 00:26:15.986 [154/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:26:15.986 [155/244] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:26:15.986 [156/244] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:26:16.245 [157/244] Linking target lib/librte_cmdline.so.24.1 00:26:16.504 [158/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:26:16.504 [159/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:26:16.504 [160/244] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:26:16.504 [161/244] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:26:16.763 [162/244] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:26:16.763 [163/244] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:26:16.763 [164/244] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:26:16.763 [165/244] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:26:16.763 [166/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:26:17.022 [167/244] Linking static target lib/librte_timer.a 00:26:17.022 [168/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:26:17.022 [169/244] Linking target lib/librte_timer.so.24.1 00:26:17.022 [170/244] Linking static target lib/librte_compressdev.a 00:26:17.022 [171/244] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:26:17.022 [172/244] Linking target lib/librte_compressdev.so.24.1 00:26:17.281 [173/244] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:26:17.281 [174/244] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:26:17.281 [175/244] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:26:17.281 [176/244] Linking static target lib/librte_dmadev.a 00:26:17.540 [177/244] Linking target lib/librte_dmadev.so.24.1 00:26:17.540 [178/244] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:26:17.540 [179/244] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:26:17.801 [180/244] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:26:17.801 [181/244] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:26:17.801 [182/244] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:26:17.801 [183/244] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:26:17.801 [184/244] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:26:17.801 [185/244] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:26:18.060 [186/244] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:26:18.060 [187/244] Linking static target lib/librte_hash.a 00:26:18.060 [188/244] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
00:26:18.060 [189/244] Linking target lib/librte_hash.so.24.1 00:26:18.060 [190/244] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:26:18.319 [191/244] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:26:18.319 [192/244] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:26:18.319 [193/244] Linking static target lib/librte_cryptodev.a 00:26:18.319 [194/244] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:26:18.319 [195/244] Linking target lib/librte_cryptodev.so.24.1 00:26:18.319 [196/244] Linking static target lib/librte_reorder.a 00:26:18.319 [197/244] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:26:18.578 [198/244] Linking target lib/librte_ethdev.so.24.1 00:26:18.578 [199/244] Linking static target lib/librte_security.a 00:26:18.578 [200/244] Linking target lib/librte_reorder.so.24.1 00:26:18.578 [201/244] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:26:18.578 [202/244] Linking static target lib/librte_power.a 00:26:18.578 [203/244] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:26:18.578 [204/244] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:26:18.837 [205/244] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:26:18.837 [206/244] Linking target lib/librte_security.so.24.1 00:26:18.837 [207/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:26:18.837 [208/244] Linking target lib/librte_power.so.24.1 00:26:19.094 [209/244] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:26:19.094 [210/244] Linking static target lib/librte_ethdev.a 00:26:19.352 [211/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:26:19.610 [212/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:26:19.610 [213/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 
00:26:19.610 [214/244] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:26:19.610 [215/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:26:19.868 [216/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:26:19.868 [217/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:26:19.868 [218/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:26:19.868 [219/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:26:20.126 [220/244] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:26:20.126 [221/244] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:26:20.126 [222/244] Linking static target drivers/libtmp_rte_bus_vdev.a 00:26:20.126 [223/244] Linking static target drivers/libtmp_rte_bus_pci.a 00:26:20.384 [224/244] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:26:20.385 [225/244] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:26:20.385 [226/244] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:26:20.648 [227/244] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:26:20.648 [228/244] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:26:20.648 [229/244] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:26:20.649 [230/244] Linking static target drivers/librte_bus_vdev.a 00:26:20.649 [231/244] Linking target drivers/librte_bus_vdev.so.24.1 00:26:20.649 [232/244] Linking static target drivers/librte_bus_pci.a 00:26:20.909 [233/244] Linking target drivers/librte_bus_pci.so.24.1 00:26:20.909 [234/244] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:26:20.909 [235/244] Linking static target drivers/libtmp_rte_mempool_ring.a 00:26:21.168 
[236/244] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:26:21.441 [237/244] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:26:21.441 [238/244] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:26:21.441 [239/244] Linking static target drivers/librte_mempool_ring.a 00:26:21.441 [240/244] Linking target drivers/librte_mempool_ring.so.24.1 00:26:22.402 [241/244] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:26:30.518 [242/244] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:26:30.518 [243/244] Linking target lib/librte_vhost.so.24.1 00:26:30.518 [244/244] Linking static target lib/librte_vhost.a 00:26:30.518 INFO: autodetecting backend as ninja 00:26:30.519 INFO: calculating backend command to run: /usr/local/bin/ninja -C /mnt/sdadir/spdk/dpdk/build-tmp 00:26:37.081 CC lib/log/log.o 00:26:37.081 CC lib/log/log_deprecated.o 00:26:37.081 CC lib/log/log_flags.o 00:26:37.081 CC lib/ut_mock/mock.o 00:26:37.339 LIB libspdk_ut_mock.a 00:26:37.339 LIB libspdk_log.a 00:26:37.906 CC lib/dma/dma.o 00:26:37.906 CXX lib/trace_parser/trace.o 00:26:37.906 CC lib/util/base64.o 00:26:37.906 CC lib/util/bit_array.o 00:26:37.906 CC lib/util/cpuset.o 00:26:37.906 CC lib/util/crc16.o 00:26:37.906 CC lib/util/crc32.o 00:26:37.906 CC lib/ioat/ioat.o 00:26:37.906 CC lib/util/crc32c.o 00:26:37.906 CC lib/util/crc32_ieee.o 00:26:37.906 CC lib/util/crc64.o 00:26:37.906 CC lib/util/dif.o 00:26:37.906 CC lib/util/fd.o 00:26:37.906 CC lib/util/fd_group.o 00:26:37.906 CC lib/util/file.o 00:26:37.906 CC lib/util/iov.o 00:26:37.906 CC lib/util/hexlify.o 00:26:37.906 CC lib/util/math.o 00:26:37.906 CC lib/util/net.o 00:26:37.906 CC lib/util/pipe.o 00:26:37.906 CC lib/util/strerror_tls.o 00:26:37.906 CC lib/util/string.o 00:26:37.906 CC lib/util/uuid.o 00:26:37.906 CC lib/util/xor.o 00:26:37.906 CC lib/util/zipf.o 00:26:38.165 CC 
lib/vfio_user/host/vfio_user_pci.o 00:26:38.165 CC lib/vfio_user/host/vfio_user.o 00:26:38.424 LIB libspdk_dma.a 00:26:38.682 LIB libspdk_ioat.a 00:26:38.682 LIB libspdk_vfio_user.a 00:26:39.249 LIB libspdk_trace_parser.a 00:26:39.249 LIB libspdk_util.a 00:26:40.183 CC lib/vmd/vmd.o 00:26:40.183 CC lib/vmd/led.o 00:26:40.183 CC lib/json/json_parse.o 00:26:40.183 CC lib/json/json_util.o 00:26:40.183 CC lib/conf/conf.o 00:26:40.183 CC lib/json/json_write.o 00:26:40.183 CC lib/env_dpdk/env.o 00:26:40.183 CC lib/env_dpdk/memory.o 00:26:40.183 CC lib/env_dpdk/pci.o 00:26:40.183 CC lib/env_dpdk/init.o 00:26:40.183 CC lib/env_dpdk/threads.o 00:26:40.183 CC lib/env_dpdk/pci_ioat.o 00:26:40.183 CC lib/env_dpdk/pci_virtio.o 00:26:40.183 CC lib/env_dpdk/pci_vmd.o 00:26:40.183 CC lib/env_dpdk/pci_idxd.o 00:26:40.183 CC lib/env_dpdk/pci_event.o 00:26:40.183 CC lib/env_dpdk/sigbus_handler.o 00:26:40.183 CC lib/env_dpdk/pci_dpdk.o 00:26:40.183 CC lib/env_dpdk/pci_dpdk_2207.o 00:26:40.183 CC lib/env_dpdk/pci_dpdk_2211.o 00:26:40.749 LIB libspdk_conf.a 00:26:41.317 LIB libspdk_json.a 00:26:41.317 LIB libspdk_vmd.a 00:26:41.885 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:26:41.885 CC lib/jsonrpc/jsonrpc_server.o 00:26:41.885 CC lib/jsonrpc/jsonrpc_client.o 00:26:41.885 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:26:41.885 LIB libspdk_env_dpdk.a 00:26:42.143 LIB libspdk_jsonrpc.a 00:26:43.078 CC lib/rpc/rpc.o 00:26:43.336 LIB libspdk_rpc.a 00:26:43.594 CC lib/keyring/keyring.o 00:26:43.853 CC lib/keyring/keyring_rpc.o 00:26:43.853 CC lib/trace/trace.o 00:26:43.853 CC lib/trace/trace_flags.o 00:26:43.853 CC lib/trace/trace_rpc.o 00:26:43.853 CC lib/notify/notify_rpc.o 00:26:43.853 CC lib/notify/notify.o 00:26:44.112 LIB libspdk_notify.a 00:26:44.112 LIB libspdk_keyring.a 00:26:44.112 LIB libspdk_trace.a 00:26:44.711 CC lib/sock/sock.o 00:26:44.711 CC lib/sock/sock_rpc.o 00:26:44.711 CC lib/thread/thread.o 00:26:44.711 CC lib/thread/iobuf.o 00:26:45.278 LIB libspdk_sock.a 00:26:45.846 CC 
lib/nvme/nvme_ctrlr_cmd.o 00:26:45.846 CC lib/nvme/nvme_ctrlr.o 00:26:45.846 CC lib/nvme/nvme_fabric.o 00:26:45.846 CC lib/nvme/nvme_ns_cmd.o 00:26:45.846 CC lib/nvme/nvme_ns.o 00:26:45.846 CC lib/nvme/nvme_qpair.o 00:26:45.846 CC lib/nvme/nvme_pcie_common.o 00:26:45.846 CC lib/nvme/nvme_pcie.o 00:26:45.846 CC lib/nvme/nvme.o 00:26:45.846 CC lib/nvme/nvme_quirks.o 00:26:45.846 CC lib/nvme/nvme_transport.o 00:26:45.846 CC lib/nvme/nvme_discovery.o 00:26:45.846 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:26:45.846 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:26:45.846 CC lib/nvme/nvme_tcp.o 00:26:45.846 CC lib/nvme/nvme_opal.o 00:26:45.846 CC lib/nvme/nvme_io_msg.o 00:26:45.846 CC lib/nvme/nvme_poll_group.o 00:26:45.846 CC lib/nvme/nvme_zns.o 00:26:45.846 CC lib/nvme/nvme_stubs.o 00:26:46.105 CC lib/nvme/nvme_auth.o 00:26:46.105 CC lib/nvme/nvme_cuse.o 00:26:46.363 LIB libspdk_thread.a 00:26:48.267 CC lib/blob/blobstore.o 00:26:48.267 CC lib/accel/accel.o 00:26:48.267 CC lib/virtio/virtio.o 00:26:48.267 CC lib/virtio/virtio_vhost_user.o 00:26:48.267 CC lib/blob/request.o 00:26:48.267 CC lib/blob/zeroes.o 00:26:48.267 CC lib/virtio/virtio_vfio_user.o 00:26:48.267 CC lib/accel/accel_rpc.o 00:26:48.267 CC lib/virtio/virtio_pci.o 00:26:48.267 CC lib/blob/blob_bs_dev.o 00:26:48.267 CC lib/accel/accel_sw.o 00:26:48.267 CC lib/init/json_config.o 00:26:48.267 CC lib/init/subsystem.o 00:26:48.267 CC lib/init/subsystem_rpc.o 00:26:48.267 CC lib/init/rpc.o 00:26:48.834 LIB libspdk_init.a 00:26:48.835 LIB libspdk_virtio.a 00:26:49.403 CC lib/event/app.o 00:26:49.403 CC lib/event/reactor.o 00:26:49.403 CC lib/event/app_rpc.o 00:26:49.403 CC lib/event/scheduler_static.o 00:26:49.403 CC lib/event/log_rpc.o 00:26:49.662 LIB libspdk_accel.a 00:26:49.662 LIB libspdk_nvme.a 00:26:49.922 LIB libspdk_event.a 00:26:50.860 CC lib/bdev/bdev.o 00:26:50.860 CC lib/bdev/bdev_rpc.o 00:26:50.860 CC lib/bdev/bdev_zone.o 00:26:50.860 CC lib/bdev/part.o 00:26:50.860 CC lib/bdev/scsi_nvme.o 00:26:51.797 LIB 
libspdk_blob.a 00:26:53.171 CC lib/blobfs/tree.o 00:26:53.171 CC lib/blobfs/blobfs.o 00:26:53.171 CC lib/lvol/lvol.o 00:26:54.106 LIB libspdk_bdev.a 00:26:54.365 LIB libspdk_blobfs.a 00:26:54.365 LIB libspdk_lvol.a 00:26:55.743 CC lib/scsi/dev.o 00:26:55.743 CC lib/scsi/lun.o 00:26:55.743 CC lib/scsi/port.o 00:26:55.743 CC lib/nvmf/ctrlr.o 00:26:55.743 CC lib/nvmf/ctrlr_discovery.o 00:26:55.743 CC lib/nvmf/subsystem.o 00:26:55.743 CC lib/nvmf/nvmf.o 00:26:55.743 CC lib/scsi/scsi.o 00:26:55.743 CC lib/nvmf/ctrlr_bdev.o 00:26:55.743 CC lib/scsi/scsi_bdev.o 00:26:55.743 CC lib/nvmf/nvmf_rpc.o 00:26:55.743 CC lib/nvmf/transport.o 00:26:55.743 CC lib/nvmf/tcp.o 00:26:55.743 CC lib/ftl/ftl_core.o 00:26:55.743 CC lib/nvmf/stubs.o 00:26:55.743 CC lib/nbd/nbd.o 00:26:55.743 CC lib/scsi/scsi_pr.o 00:26:55.743 CC lib/nvmf/mdns_server.o 00:26:55.743 CC lib/scsi/scsi_rpc.o 00:26:55.743 CC lib/ftl/ftl_init.o 00:26:55.743 CC lib/ftl/ftl_layout.o 00:26:55.743 CC lib/nbd/nbd_rpc.o 00:26:55.743 CC lib/scsi/task.o 00:26:55.743 CC lib/nvmf/auth.o 00:26:55.743 CC lib/ftl/ftl_debug.o 00:26:55.743 CC lib/ftl/ftl_io.o 00:26:55.743 CC lib/ftl/ftl_sb.o 00:26:55.743 CC lib/ftl/ftl_l2p.o 00:26:55.743 CC lib/ftl/ftl_l2p_flat.o 00:26:55.743 CC lib/ftl/ftl_nv_cache.o 00:26:55.743 CC lib/ftl/ftl_band.o 00:26:55.743 CC lib/ftl/ftl_band_ops.o 00:26:55.743 CC lib/ftl/ftl_writer.o 00:26:55.743 CC lib/ftl/ftl_rq.o 00:26:55.743 CC lib/ftl/ftl_reloc.o 00:26:55.743 CC lib/ftl/ftl_l2p_cache.o 00:26:55.743 CC lib/ftl/ftl_p2l.o 00:26:55.743 CC lib/ftl/mngt/ftl_mngt.o 00:26:55.743 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_startup.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_md.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_misc.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_band.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:26:56.002 CC 
lib/ftl/mngt/ftl_mngt_p2l.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:26:56.002 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:26:56.002 CC lib/ftl/utils/ftl_conf.o 00:26:56.002 CC lib/ftl/utils/ftl_md.o 00:26:56.002 CC lib/ftl/utils/ftl_mempool.o 00:26:56.002 CC lib/ftl/utils/ftl_bitmap.o 00:26:56.002 CC lib/ftl/utils/ftl_property.o 00:26:56.002 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:26:56.002 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:26:56.002 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:26:56.002 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:26:56.002 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:26:56.002 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:26:56.002 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:26:56.002 CC lib/ftl/upgrade/ftl_sb_v3.o 00:26:56.002 CC lib/ftl/upgrade/ftl_sb_v5.o 00:26:56.002 CC lib/ftl/nvc/ftl_nvc_dev.o 00:26:56.261 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:26:56.261 CC lib/ftl/base/ftl_base_dev.o 00:26:56.261 CC lib/ftl/base/ftl_base_bdev.o 00:26:58.165 LIB libspdk_nbd.a 00:26:58.165 LIB libspdk_scsi.a 00:26:58.423 LIB libspdk_ftl.a 00:26:58.683 LIB libspdk_nvmf.a 00:26:58.942 CC lib/iscsi/conn.o 00:26:58.942 CC lib/iscsi/init_grp.o 00:26:58.942 CC lib/iscsi/iscsi.o 00:26:58.942 CC lib/iscsi/md5.o 00:26:58.942 CC lib/iscsi/portal_grp.o 00:26:58.942 CC lib/vhost/vhost_rpc.o 00:26:58.942 CC lib/iscsi/tgt_node.o 00:26:58.942 CC lib/iscsi/param.o 00:26:58.942 CC lib/vhost/vhost.o 00:26:58.942 CC lib/iscsi/iscsi_subsystem.o 00:26:58.942 CC lib/vhost/vhost_scsi.o 00:26:58.942 CC lib/iscsi/task.o 00:26:58.942 CC lib/iscsi/iscsi_rpc.o 00:26:58.942 CC lib/vhost/vhost_blk.o 00:26:58.942 CC lib/vhost/rte_vhost_user.o 00:27:00.845 LIB libspdk_vhost.a 00:27:00.845 LIB libspdk_iscsi.a 00:27:06.116 CC module/env_dpdk/env_dpdk_rpc.o 00:27:06.116 CC module/sock/posix/posix.o 00:27:06.116 CC module/accel/ioat/accel_ioat.o 00:27:06.116 CC module/keyring/linux/keyring.o 00:27:06.116 CC module/keyring/linux/keyring_rpc.o 00:27:06.116 CC module/accel/error/accel_error.o 
00:27:06.116 CC module/accel/ioat/accel_ioat_rpc.o 00:27:06.116 CC module/accel/error/accel_error_rpc.o 00:27:06.116 CC module/blob/bdev/blob_bdev.o 00:27:06.116 CC module/keyring/file/keyring.o 00:27:06.116 CC module/keyring/file/keyring_rpc.o 00:27:06.116 CC module/scheduler/dynamic/scheduler_dynamic.o 00:27:06.116 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:27:06.116 CC module/scheduler/gscheduler/gscheduler.o 00:27:06.375 LIB libspdk_env_dpdk_rpc.a 00:27:06.633 LIB libspdk_keyring_linux.a 00:27:06.633 LIB libspdk_scheduler_gscheduler.a 00:27:06.633 LIB libspdk_scheduler_dpdk_governor.a 00:27:06.633 LIB libspdk_keyring_file.a 00:27:06.633 LIB libspdk_accel_ioat.a 00:27:06.633 LIB libspdk_accel_error.a 00:27:06.633 LIB libspdk_blob_bdev.a 00:27:06.633 LIB libspdk_scheduler_dynamic.a 00:27:07.200 LIB libspdk_sock_posix.a 00:27:07.200 CC module/bdev/error/vbdev_error.o 00:27:07.200 CC module/bdev/error/vbdev_error_rpc.o 00:27:07.200 CC module/bdev/gpt/gpt.o 00:27:07.200 CC module/bdev/gpt/vbdev_gpt.o 00:27:07.200 CC module/blobfs/bdev/blobfs_bdev.o 00:27:07.200 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:27:07.200 CC module/bdev/delay/vbdev_delay.o 00:27:07.200 CC module/bdev/delay/vbdev_delay_rpc.o 00:27:07.200 CC module/bdev/passthru/vbdev_passthru.o 00:27:07.200 CC module/bdev/raid/bdev_raid.o 00:27:07.200 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:27:07.200 CC module/bdev/null/bdev_null.o 00:27:07.200 CC module/bdev/aio/bdev_aio.o 00:27:07.200 CC module/bdev/null/bdev_null_rpc.o 00:27:07.200 CC module/bdev/raid/bdev_raid_rpc.o 00:27:07.200 CC module/bdev/aio/bdev_aio_rpc.o 00:27:07.200 CC module/bdev/nvme/bdev_nvme.o 00:27:07.200 CC module/bdev/raid/bdev_raid_sb.o 00:27:07.200 CC module/bdev/malloc/bdev_malloc.o 00:27:07.200 CC module/bdev/nvme/bdev_nvme_rpc.o 00:27:07.200 CC module/bdev/raid/raid0.o 00:27:07.200 CC module/bdev/raid/raid1.o 00:27:07.200 CC module/bdev/nvme/nvme_rpc.o 00:27:07.200 CC module/bdev/nvme/bdev_mdns_client.o 00:27:07.200 
CC module/bdev/malloc/bdev_malloc_rpc.o 00:27:07.200 CC module/bdev/raid/concat.o 00:27:07.200 CC module/bdev/virtio/bdev_virtio_scsi.o 00:27:07.200 CC module/bdev/nvme/vbdev_opal.o 00:27:07.200 CC module/bdev/split/vbdev_split.o 00:27:07.200 CC module/bdev/lvol/vbdev_lvol.o 00:27:07.200 CC module/bdev/nvme/vbdev_opal_rpc.o 00:27:07.200 CC module/bdev/virtio/bdev_virtio_blk.o 00:27:07.200 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:27:07.200 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:27:07.200 CC module/bdev/virtio/bdev_virtio_rpc.o 00:27:07.200 CC module/bdev/split/vbdev_split_rpc.o 00:27:07.200 CC module/bdev/zone_block/vbdev_zone_block.o 00:27:07.200 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:27:07.458 CC module/bdev/ftl/bdev_ftl.o 00:27:07.458 CC module/bdev/ftl/bdev_ftl_rpc.o 00:27:08.394 LIB libspdk_blobfs_bdev.a 00:27:08.394 LIB libspdk_bdev_null.a 00:27:08.394 LIB libspdk_bdev_error.a 00:27:08.394 LIB libspdk_bdev_zone_block.a 00:27:08.394 LIB libspdk_bdev_split.a 00:27:08.394 LIB libspdk_bdev_gpt.a 00:27:08.653 LIB libspdk_bdev_passthru.a 00:27:08.653 LIB libspdk_bdev_malloc.a 00:27:08.653 LIB libspdk_bdev_aio.a 00:27:08.653 LIB libspdk_bdev_ftl.a 00:27:08.653 LIB libspdk_bdev_delay.a 00:27:08.653 LIB libspdk_bdev_lvol.a 00:27:08.912 LIB libspdk_bdev_virtio.a 00:27:09.479 LIB libspdk_bdev_raid.a 00:27:10.416 LIB libspdk_bdev_nvme.a 00:27:12.317 CC module/event/subsystems/sock/sock.o 00:27:12.317 CC module/event/subsystems/vmd/vmd.o 00:27:12.317 CC module/event/subsystems/keyring/keyring.o 00:27:12.317 CC module/event/subsystems/vmd/vmd_rpc.o 00:27:12.317 CC module/event/subsystems/iobuf/iobuf.o 00:27:12.317 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:27:12.317 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:27:12.317 CC module/event/subsystems/scheduler/scheduler.o 00:27:12.575 LIB libspdk_event_keyring.a 00:27:12.575 LIB libspdk_event_vhost_blk.a 00:27:12.575 LIB libspdk_event_sock.a 00:27:12.575 LIB libspdk_event_scheduler.a 00:27:12.834 
LIB libspdk_event_vmd.a 00:27:12.834 LIB libspdk_event_iobuf.a 00:27:13.402 CC module/event/subsystems/accel/accel.o 00:27:13.660 LIB libspdk_event_accel.a 00:27:13.919 CC module/event/subsystems/bdev/bdev.o 00:27:14.178 LIB libspdk_event_bdev.a 00:27:14.745 CC module/event/subsystems/scsi/scsi.o 00:27:14.745 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:27:14.745 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:27:14.745 CC module/event/subsystems/nbd/nbd.o 00:27:15.004 LIB libspdk_event_scsi.a 00:27:15.004 LIB libspdk_event_nbd.a 00:27:15.262 LIB libspdk_event_nvmf.a 00:27:15.521 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:27:15.521 CC module/event/subsystems/iscsi/iscsi.o 00:27:15.779 LIB libspdk_event_vhost_scsi.a 00:27:15.779 LIB libspdk_event_iscsi.a 00:27:16.038 make[1]: Nothing to be done for 'all'. 00:27:16.296 CXX app/trace/trace.o 00:27:16.296 CC app/trace_record/trace_record.o 00:27:16.296 CC app/spdk_nvme_identify/identify.o 00:27:16.296 CC app/spdk_nvme_perf/perf.o 00:27:16.296 CC app/spdk_lspci/spdk_lspci.o 00:27:16.296 CC app/spdk_nvme_discover/discovery_aer.o 00:27:16.296 CC app/spdk_top/spdk_top.o 00:27:16.296 CC examples/interrupt_tgt/interrupt_tgt.o 00:27:16.296 CC app/iscsi_tgt/iscsi_tgt.o 00:27:16.296 CC app/nvmf_tgt/nvmf_main.o 00:27:16.296 CC app/spdk_dd/spdk_dd.o 00:27:16.296 CC app/spdk_tgt/spdk_tgt.o 00:27:16.555 CC examples/util/zipf/zipf.o 00:27:16.555 CC examples/ioat/verify/verify.o 00:27:16.555 CC examples/ioat/perf/perf.o 00:27:16.814 LINK spdk_lspci 00:27:16.814 LINK nvmf_tgt 00:27:16.814 LINK iscsi_tgt 00:27:16.814 LINK zipf 00:27:16.814 LINK spdk_nvme_discover 00:27:16.814 LINK interrupt_tgt 00:27:16.814 LINK spdk_tgt 00:27:16.814 LINK ioat_perf 00:27:16.814 LINK verify 00:27:16.814 LINK spdk_trace_record 00:27:17.073 LINK spdk_trace 00:27:17.073 LINK spdk_dd 00:27:18.011 LINK spdk_nvme_perf 00:27:18.270 LINK spdk_top 00:27:18.270 LINK spdk_nvme_identify 00:27:19.648 CC app/vhost/vhost.o 00:27:19.648 LINK vhost 
00:27:22.963 CC examples/vmd/led/led.o 00:27:22.963 CC examples/vmd/lsvmd/lsvmd.o 00:27:22.963 CC examples/sock/hello_world/hello_sock.o 00:27:22.963 CC examples/thread/thread/thread_ex.o 00:27:22.963 LINK lsvmd 00:27:22.963 LINK led 00:27:23.220 LINK hello_sock 00:27:24.593 LINK thread 00:27:34.566 CC examples/nvme/nvme_manage/nvme_manage.o 00:27:34.566 CC examples/nvme/abort/abort.o 00:27:34.566 CC examples/nvme/reconnect/reconnect.o 00:27:34.566 CC examples/nvme/hotplug/hotplug.o 00:27:34.566 CC examples/nvme/arbitration/arbitration.o 00:27:34.566 CC examples/nvme/cmb_copy/cmb_copy.o 00:27:34.566 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:27:34.566 CC examples/nvme/hello_world/hello_world.o 00:27:34.566 LINK cmb_copy 00:27:34.566 LINK hello_world 00:27:34.566 LINK pmr_persistence 00:27:34.566 LINK hotplug 00:27:34.566 LINK abort 00:27:34.825 LINK reconnect 00:27:34.825 LINK arbitration 00:27:35.083 LINK nvme_manage 00:27:47.288 CC examples/accel/perf/accel_perf.o 00:27:47.288 CC examples/blob/hello_world/hello_blob.o 00:27:47.288 CC examples/blob/cli/blobcli.o 00:27:47.854 LINK hello_blob 00:27:48.421 LINK accel_perf 00:27:48.421 LINK blobcli 00:27:54.982 CC examples/bdev/hello_world/hello_bdev.o 00:27:54.982 CC examples/bdev/bdevperf/bdevperf.o 00:27:54.982 LINK hello_bdev 00:27:55.918 LINK bdevperf 00:28:08.152 CC examples/nvmf/nvmf/nvmf.o 00:28:08.152 LINK nvmf 00:28:18.130 make: Leaving directory '/mnt/sdadir/spdk' 00:28:18.130 02:23:26 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@101 -- # rm -rf /mnt/sdadir/spdk 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@102 -- # umount /mnt/sdadir 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@103 -- # rm -rf /mnt/sdadir 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # stats=($(cat "/sys/block/$dev/stat")) 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@105 -- # cat 
/sys/block/sda/stat 00:29:25.822 READ IO cnt: 100 merges: 0 sectors: 3336 ticks: 93 00:29:25.822 WRITE IO cnt: 631206 merges: 622241 sectors: 10812160 ticks: 872526 00:29:25.822 in flight: 0 io ticks: 331378 time in queue: 946530 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@107 -- # printf 'READ IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 100 0 3336 93 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@109 -- # printf 'WRITE IO cnt: % 8u merges: % 8u sectors: % 8u ticks: % 8u\n' 631206 622241 10812160 872526 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@111 -- # printf 'in flight: % 8u io ticks: % 8u time in queue: % 8u\n' 0 331378 946530 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@1 -- # cleanup 00:29:25.822 02:24:23 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_delete Nvme0n1 00:29:25.822 [2024-07-23 02:24:24.000401] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Nvme0n1p0) received event(SPDK_BDEV_EVENT_REMOVE) 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@13 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_error_delete EE_Malloc0 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@15 -- # killprocess 82233 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@948 -- # '[' -z 82233 ']' 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@952 -- # kill -0 82233 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # uname 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:25.822 02:24:24 
iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 82233 00:29:25.822 killing process with pid 82233 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 82233' 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@967 -- # kill 82233 00:29:25.822 02:24:24 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@972 -- # wait 82233 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@17 -- # mountpoint -q /mnt/sdadir 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@18 -- # rm -rf /mnt/sdadir 00:29:25.822 Cleaning up iSCSI connection 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@20 -- # iscsicleanup 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:29:25.822 Logging out of session [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] 00:29:25.822 Logout of [sid: 72, target: iqn.2013-06.com.intel.ch.spdk:Target1, portal: 10.0.0.1,3260] successful. 
00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@983 -- # rm -rf 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- ext4test/ext4test.sh@21 -- # iscsitestfini 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:29:25.822 ************************************ 00:29:25.822 END TEST iscsi_tgt_ext4test 00:29:25.822 ************************************ 00:29:25.822 00:29:25.822 real 7m39.944s 00:29:25.822 user 12m3.993s 00:29:25.822 sys 3m3.320s 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_ext4test -- common/autotest_common.sh@10 -- # set +x 00:29:25.822 02:24:27 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:29:25.822 02:24:27 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@49 -- # '[' 1 -eq 1 ']' 00:29:25.822 02:24:27 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@50 -- # hash ceph 00:29:25.822 02:24:27 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@54 -- # run_test iscsi_tgt_rbd /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:29:25.822 02:24:27 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:25.822 02:24:27 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.822 02:24:27 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:29:25.822 ************************************ 00:29:25.822 START TEST iscsi_tgt_rbd 00:29:25.822 ************************************ 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd/rbd.sh 00:29:25.822 * Looking for test storage... 
00:29:25.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/rbd 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:29:25.822 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- 
iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@11 -- # iscsitestinit 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@13 -- # timing_enter rbd_setup 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@14 -- # rbd_setup 10.0.0.1 spdk_iscsi_ns 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1005 -- # '[' -z 10.0.0.1 ']' 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1009 -- # '[' -n spdk_iscsi_ns ']' 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # ip netns list 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1010 -- # grep spdk_iscsi_ns 00:29:25.823 spdk_iscsi_ns (id: 0) 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1011 -- # NS_CMD='ip netns exec spdk_iscsi_ns' 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1021 
-- # RBD_NAME=foo 00:29:25.823 02:24:27 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1022 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:29:25.823 + base_dir=/var/tmp/ceph 00:29:25.823 + image=/var/tmp/ceph/ceph_raw.img 00:29:25.823 + dev=/dev/loop200 00:29:25.823 + pkill -9 ceph 00:29:25.823 + sleep 3 00:29:25.823 + umount /dev/loop200p2 00:29:25.823 umount: /dev/loop200p2: no mount point specified. 00:29:25.823 + losetup -d /dev/loop200 00:29:25.823 losetup: /dev/loop200: failed to use device: No such device 00:29:25.823 + rm -rf /var/tmp/ceph 00:29:25.823 02:24:30 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1023 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 10.0.0.1 00:29:25.823 + set -e 00:29:25.823 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:29:25.823 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:29:25.823 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:29:25.823 + base_dir=/var/tmp/ceph 00:29:25.823 + mon_ip=10.0.0.1 00:29:25.823 + mon_dir=/var/tmp/ceph/mon.a 00:29:25.823 + pid_dir=/var/tmp/ceph/pid 00:29:25.823 + ceph_conf=/var/tmp/ceph/ceph.conf 00:29:25.823 + mnt_dir=/var/tmp/ceph/mnt 00:29:25.823 + image=/var/tmp/ceph_raw.img 00:29:25.823 + dev=/dev/loop200 00:29:25.823 + modprobe loop 00:29:25.823 + umount /dev/loop200p2 00:29:25.823 umount: /dev/loop200p2: no mount point specified. 00:29:25.823 + true 00:29:25.823 + losetup -d /dev/loop200 00:29:25.823 losetup: /dev/loop200: failed to use device: No such device 00:29:25.823 + true 00:29:25.823 + '[' -d /var/tmp/ceph ']' 00:29:25.823 + mkdir /var/tmp/ceph 00:29:25.823 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:29:25.823 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:29:25.823 + fallocate -l 4G /var/tmp/ceph_raw.img 00:29:25.823 + mknod /dev/loop200 b 7 200 00:29:25.823 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:29:25.823 Partitioning /dev/loop200 00:29:25.823 + PARTED='parted -s' 00:29:25.823 + SGDISK=sgdisk 00:29:25.823 + echo 'Partitioning /dev/loop200' 00:29:25.823 + parted -s /dev/loop200 mktable gpt 00:29:25.823 + sleep 2 00:29:25.823 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:29:25.823 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:29:25.823 Setting name on /dev/loop200 00:29:25.823 + partno=0 00:29:25.823 + echo 'Setting name on /dev/loop200' 00:29:25.823 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:29:25.823 Warning: The kernel is still using the old partition table. 00:29:25.823 The new table will be used at the next reboot or after you 00:29:25.823 run partprobe(8) or kpartx(8) 00:29:25.823 The operation has completed successfully. 00:29:25.823 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:29:26.082 Warning: The kernel is still using the old partition table. 00:29:26.082 The new table will be used at the next reboot or after you 00:29:26.082 run partprobe(8) or kpartx(8) 00:29:26.082 The operation has completed successfully. 
00:29:26.082 + kpartx /dev/loop200 00:29:26.082 loop200p1 : 0 4192256 /dev/loop200 2048 00:29:26.082 loop200p2 : 0 4192256 /dev/loop200 4194304 00:29:26.082 ++ awk '{print $3}' 00:29:26.082 ++ ceph -v 00:29:26.340 + ceph_version=17.2.7 00:29:26.340 + ceph_maj=17 00:29:26.340 + '[' 17 -gt 12 ']' 00:29:26.340 + update_config=true 00:29:26.340 + rm -f /var/log/ceph/ceph-mon.a.log 00:29:26.340 + set_min_mon_release='--set-min-mon-release 14' 00:29:26.340 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:29:26.340 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:29:26.340 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:29:26.340 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:29:26.340 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:29:26.340 = sectsz=512 attr=2, projid32bit=1 00:29:26.340 = crc=1 finobt=1, sparse=1, rmapbt=0 00:29:26.341 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:29:26.341 data = bsize=4096 blocks=524032, imaxpct=25 00:29:26.341 = sunit=0 swidth=0 blks 00:29:26.341 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:29:26.341 log =internal log bsize=4096 blocks=16384, version=2 00:29:26.341 = sectsz=512 sunit=0 blks, lazy-count=1 00:29:26.341 realtime =none extsz=4096 blocks=0, rtextents=0 00:29:26.341 Discarding blocks...Done. 00:29:26.341 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:29:26.341 + cat 00:29:26.341 + rm -rf '/var/tmp/ceph/mon.a/*' 00:29:26.341 + mkdir -p /var/tmp/ceph/mon.a 00:29:26.341 + mkdir -p /var/tmp/ceph/pid 00:29:26.341 + rm -f /etc/ceph/ceph.client.admin.keyring 00:29:26.341 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:29:26.341 creating /var/tmp/ceph/keyring 00:29:26.341 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:29:26.341 + monmaptool --create --clobber --add a 10.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:29:26.341 monmaptool: monmap file /var/tmp/ceph/monmap 00:29:26.341 monmaptool: generated fsid 04fd14fe-2437-4f63-a854-d36fb9700fc5 00:29:26.341 setting min_mon_release = octopus 00:29:26.341 epoch 0 00:29:26.341 fsid 04fd14fe-2437-4f63-a854-d36fb9700fc5 00:29:26.341 last_changed 2024-07-23T02:24:35.057955+0000 00:29:26.341 created 2024-07-23T02:24:35.057955+0000 00:29:26.341 min_mon_release 15 (octopus) 00:29:26.341 election_strategy: 1 00:29:26.341 0: v2:10.0.0.1:12046/0 mon.a 00:29:26.341 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:29:26.341 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:29:26.600 + '[' true = true ']' 00:29:26.600 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:29:26.600 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:29:26.600 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:29:26.600 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:29:26.600 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:29:26.600 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:29:26.600 ++ hostname 00:29:26.600 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:29:26.600 + true 00:29:26.600 + '[' true = true ']' 00:29:26.600 + ceph-conf --name mon.a --show-config-value log_file 00:29:26.600 
/var/log/ceph/ceph-mon.a.log 00:29:26.600 ++ ceph -s 00:29:26.600 ++ grep id 00:29:26.600 ++ awk '{print $2}' 00:29:26.859 + fsid=04fd14fe-2437-4f63-a854-d36fb9700fc5 00:29:26.859 + sed -i 's/perf = true/perf = true\n\tfsid = 04fd14fe-2437-4f63-a854-d36fb9700fc5 \n/g' /var/tmp/ceph/ceph.conf 00:29:26.859 + (( ceph_maj < 18 )) 00:29:26.859 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:29:26.859 + cat /var/tmp/ceph/ceph.conf 00:29:26.859 [global] 00:29:26.859 debug_lockdep = 0/0 00:29:26.859 debug_context = 0/0 00:29:26.859 debug_crush = 0/0 00:29:26.859 debug_buffer = 0/0 00:29:26.859 debug_timer = 0/0 00:29:26.859 debug_filer = 0/0 00:29:26.859 debug_objecter = 0/0 00:29:26.859 debug_rados = 0/0 00:29:26.859 debug_rbd = 0/0 00:29:26.859 debug_ms = 0/0 00:29:26.859 debug_monc = 0/0 00:29:26.859 debug_tp = 0/0 00:29:26.859 debug_auth = 0/0 00:29:26.859 debug_finisher = 0/0 00:29:26.859 debug_heartbeatmap = 0/0 00:29:26.859 debug_perfcounter = 0/0 00:29:26.859 debug_asok = 0/0 00:29:26.859 debug_throttle = 0/0 00:29:26.859 debug_mon = 0/0 00:29:26.859 debug_paxos = 0/0 00:29:26.859 debug_rgw = 0/0 00:29:26.859 00:29:26.859 perf = true 00:29:26.859 osd objectstore = filestore 00:29:26.859 00:29:26.859 fsid = 04fd14fe-2437-4f63-a854-d36fb9700fc5 00:29:26.859 00:29:26.859 mutex_perf_counter = false 00:29:26.859 throttler_perf_counter = false 00:29:26.859 rbd cache = false 00:29:26.859 mon_allow_pool_delete = true 00:29:26.859 00:29:26.859 osd_pool_default_size = 1 00:29:26.859 00:29:26.859 [mon] 00:29:26.859 mon_max_pool_pg_num=166496 00:29:26.859 mon_osd_max_split_count = 10000 00:29:26.859 mon_pg_warn_max_per_osd = 10000 00:29:26.859 00:29:26.859 [osd] 00:29:26.859 osd_op_threads = 64 00:29:26.859 filestore_queue_max_ops=5000 00:29:26.859 filestore_queue_committing_max_ops=5000 00:29:26.859 journal_max_write_entries=1000 00:29:26.859 journal_queue_max_ops=3000 00:29:26.859 objecter_inflight_ops=102400 00:29:26.859 
filestore_wbthrottle_enable=false 00:29:26.859 filestore_queue_max_bytes=1048576000 00:29:26.859 filestore_queue_committing_max_bytes=1048576000 00:29:26.859 journal_max_write_bytes=1048576000 00:29:26.859 journal_queue_max_bytes=1048576000 00:29:26.859 ms_dispatch_throttle_bytes=1048576000 00:29:26.859 objecter_inflight_op_bytes=1048576000 00:29:26.859 filestore_max_sync_interval=10 00:29:26.859 osd_client_message_size_cap = 0 00:29:26.859 osd_client_message_cap = 0 00:29:26.859 osd_enable_op_tracker = false 00:29:26.859 filestore_fd_cache_size = 10240 00:29:26.859 filestore_fd_cache_shards = 64 00:29:26.859 filestore_op_threads = 16 00:29:26.859 osd_op_num_shards = 48 00:29:26.859 osd_op_num_threads_per_shard = 2 00:29:26.859 osd_pg_object_context_cache_count = 10240 00:29:26.859 filestore_odsync_write = True 00:29:26.859 journal_dynamic_throttle = True 00:29:26.859 00:29:26.859 [osd.0] 00:29:26.859 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:29:26.859 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:29:26.859 00:29:26.859 # add mon address 00:29:26.859 [mon.a] 00:29:26.859 mon addr = v2:10.0.0.1:12046 00:29:26.859 + i=0 00:29:26.859 + mkdir -p /var/tmp/ceph/mnt 00:29:26.859 ++ uuidgen 00:29:26.859 + uuid=bbf1655c-fad8-47af-918f-8c0034868848 00:29:26.859 + ceph -c /var/tmp/ceph/ceph.conf osd create bbf1655c-fad8-47af-918f-8c0034868848 0 00:29:27.118 0 00:29:27.118 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid bbf1655c-fad8-47af-918f-8c0034868848 --check-needs-journal --no-mon-config 00:29:27.118 2024-07-23T02:24:35.818+0000 7fcea616d400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:29:27.118 2024-07-23T02:24:35.819+0000 7fcea616d400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:29:27.118 2024-07-23T02:24:35.871+0000 7fcea616d400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected bbf1655c-fad8-47af-918f-8c0034868848, invalid (someone else's?) journal 00:29:27.377 2024-07-23T02:24:35.911+0000 7fcea616d400 -1 journal do_read_entry(4096): bad header magic 00:29:27.377 2024-07-23T02:24:35.911+0000 7fcea616d400 -1 journal do_read_entry(4096): bad header magic 00:29:27.377 ++ hostname 00:29:27.377 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:29:28.342 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:29:28.342 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:29:28.342 added key for osd.0 00:29:28.342 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:29:28.600 + class_dir=/lib64/rados-classes 00:29:28.600 + [[ -e /lib64/rados-classes ]] 00:29:28.600 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:29:28.859 + pkill -9 ceph-osd 00:29:28.859 + true 00:29:28.859 + sleep 2 00:29:31.389 + mkdir -p /var/tmp/ceph/pid 00:29:31.389 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:29:31.389 2024-07-23T02:24:39.654+0000 7f4508a56400 -1 Falling back to public interface 00:29:31.389 2024-07-23T02:24:39.704+0000 7f4508a56400 -1 journal do_read_entry(8192): bad header magic 00:29:31.389 2024-07-23T02:24:39.704+0000 7f4508a56400 -1 journal do_read_entry(8192): bad header magic 00:29:31.389 2024-07-23T02:24:39.734+0000 7f4508a56400 -1 osd.0 0 log_to_monitors true 00:29:31.389 02:24:39 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1025 -- # ip netns exec spdk_iscsi_ns ceph osd pool create rbd 128 00:29:32.091 pool 'rbd' created 00:29:32.091 02:24:40 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@1026 -- # ip netns exec spdk_iscsi_ns rbd create foo --size 1000 00:29:37.362 02:24:45 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@15 -- # trap 'rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:37.362 02:24:45 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@16 -- # timing_exit rbd_setup 00:29:37.362 02:24:45 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.362 02:24:45 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@18 -- # fio_py=/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@20 -- # timing_enter start_iscsi_tgt 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@23 -- # pid=122920 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@25 -- # trap 'killprocess $pid; rbd_cleanup; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@22 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0xF --wait-for-rpc 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@27 -- # waitforlisten 122920 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@829 -- # '[' -z 122920 ']' 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.362 02:24:46 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:37.621 [2024-07-23 02:24:46.219967] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:37.621 [2024-07-23 02:24:46.220169] [ DPDK EAL parameters: iscsi --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122920 ] 00:29:37.879 [2024-07-23 02:24:46.399024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.136 [2024-07-23 02:24:46.670094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.136 [2024-07-23 02:24:46.670238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.136 [2024-07-23 02:24:46.671335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.136 [2024-07-23 02:24:46.671360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@862 -- # return 0 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@28 -- # rpc_cmd iscsi_set_options -o 30 -a 16 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@29 -- # rpc_cmd framework_start_init 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.395 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 iscsi_tgt is listening. Running tests... 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@32 -- # timing_exit start_iscsi_tgt 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rpc_cmd bdev_rbd_register_cluster iscsi_rbd_cluster --key-file /etc/ceph/ceph.client.admin.keyring --config-file /etc/ceph/ceph.conf 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@36 -- # rbd_cluster_name=iscsi_rbd_cluster 
00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@37 -- # rpc_cmd bdev_rbd_get_clusters_info -b iscsi_rbd_cluster 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 { 00:29:39.331 "cluster_name": "iscsi_rbd_cluster", 00:29:39.331 "config_file": "/etc/ceph/ceph.conf", 00:29:39.331 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:29:39.331 } 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rpc_cmd bdev_rbd_create rbd foo 4096 -c iscsi_rbd_cluster 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 [2024-07-23 02:24:47.970776] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@38 -- # rbd_bdev=Ceph0 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@39 -- # rpc_cmd bdev_get_bdevs 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 [ 00:29:39.331 { 00:29:39.331 "name": "Ceph0", 00:29:39.331 "aliases": [ 00:29:39.331 "7665da46-a7f1-48c5-9ebe-700e27a60963" 00:29:39.331 ], 00:29:39.331 "product_name": "Ceph Rbd Disk", 00:29:39.331 "block_size": 4096, 00:29:39.331 "num_blocks": 256000, 00:29:39.331 "uuid": "7665da46-a7f1-48c5-9ebe-700e27a60963", 00:29:39.331 "assigned_rate_limits": { 00:29:39.331 "rw_ios_per_sec": 0, 00:29:39.331 "rw_mbytes_per_sec": 0, 00:29:39.331 "r_mbytes_per_sec": 0, 00:29:39.331 "w_mbytes_per_sec": 0 
00:29:39.331 }, 00:29:39.331 "claimed": false, 00:29:39.331 "zoned": false, 00:29:39.331 "supported_io_types": { 00:29:39.331 "read": true, 00:29:39.331 "write": true, 00:29:39.331 "unmap": true, 00:29:39.331 "flush": true, 00:29:39.331 "reset": true, 00:29:39.331 "nvme_admin": false, 00:29:39.331 "nvme_io": false, 00:29:39.331 "nvme_io_md": false, 00:29:39.331 "write_zeroes": true, 00:29:39.331 "zcopy": false, 00:29:39.331 "get_zone_info": false, 00:29:39.331 "zone_management": false, 00:29:39.331 "zone_append": false, 00:29:39.331 "compare": false, 00:29:39.331 "compare_and_write": true, 00:29:39.331 "abort": false, 00:29:39.331 "seek_hole": false, 00:29:39.331 "seek_data": false, 00:29:39.331 "copy": false, 00:29:39.331 "nvme_iov_md": false 00:29:39.331 }, 00:29:39.331 "driver_specific": { 00:29:39.331 "rbd": { 00:29:39.331 "pool_name": "rbd", 00:29:39.331 "rbd_name": "foo", 00:29:39.331 "config_file": "/etc/ceph/ceph.conf", 00:29:39.331 "key_file": "/etc/ceph/ceph.client.admin.keyring" 00:29:39.331 } 00:29:39.331 } 00:29:39.331 } 00:29:39.331 ] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@41 -- # rpc_cmd bdev_rbd_resize Ceph0 2000 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:47 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 true 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # rpc_cmd bdev_get_bdevs 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # grep num_blocks 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- 
# sed 's/[^[:digit:]]//g' 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@42 -- # num_block=512000 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@44 -- # total_size=2000 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@45 -- # '[' 2000 '!=' 2000 ']' 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@53 -- # rpc_cmd iscsi_create_target_node Target3 Target3_alias Ceph0:0 1:2 64 -d 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:39.331 02:24:48 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@54 -- # sleep 1 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@56 -- # iscsiadm -m discovery -t sendtargets -p 10.0.0.1:3260 00:29:40.708 10.0.0.1:3260,1 iqn.2016-06.io.spdk:Target3 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@57 -- # iscsiadm -m node --login -p 10.0.0.1:3260 00:29:40.708 Logging in to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:29:40.708 Login to [iface: default, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@58 -- # waitforiscsidevices 1 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@116 -- # local num=1 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i = 1 )) 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@118 -- # (( i <= 20 )) 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # iscsiadm -m session -P 3 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # grep -c 'Attached scsi disk sd[a-z]*' 00:29:40.708 [2024-07-23 02:24:49.131807] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@119 -- # n=1 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@120 -- # '[' 1 -ne 1 ']' 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@123 -- # return 0 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@60 -- # trap 'iscsicleanup; killprocess $pid; rbd_cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:40.708 02:24:49 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 4096 -d 1 -t randrw -r 1 -v 00:29:40.708 [global] 00:29:40.708 thread=1 00:29:40.708 invalidate=1 00:29:40.708 rw=randrw 00:29:40.708 time_based=1 00:29:40.708 runtime=1 00:29:40.708 ioengine=libaio 00:29:40.708 direct=1 00:29:40.708 bs=4096 00:29:40.708 iodepth=1 00:29:40.708 norandommap=0 00:29:40.708 numjobs=1 00:29:40.708 00:29:40.708 verify_dump=1 00:29:40.708 verify_backlog=512 00:29:40.708 verify_state_save=0 00:29:40.708 do_verify=1 00:29:40.708 verify=crc32c-intel 00:29:40.708 [job0] 00:29:40.708 filename=/dev/sda 00:29:40.708 queue_depth set to 113 (sda) 00:29:40.708 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:40.708 fio-3.35 00:29:40.708 Starting 1 thread 00:29:40.708 
[2024-07-23 02:24:49.314905] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:42.085 [2024-07-23 02:24:50.435755] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:42.085 00:29:42.085 job0: (groupid=0, jobs=1): err= 0: pid=123040: Tue Jul 23 02:24:50 2024 00:29:42.085 read: IOPS=50, BW=203KiB/s (208kB/s)(204KiB/1005msec) 00:29:42.085 slat (nsec): min=12480, max=63881, avg=34486.63, stdev=11611.72 00:29:42.085 clat (usec): min=214, max=2678, avg=500.14, stdev=402.89 00:29:42.085 lat (usec): min=236, max=2691, avg=534.62, stdev=400.69 00:29:42.085 clat percentiles (usec): 00:29:42.085 | 1.00th=[ 215], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 277], 00:29:42.085 | 30.00th=[ 334], 40.00th=[ 383], 50.00th=[ 433], 60.00th=[ 474], 00:29:42.085 | 70.00th=[ 494], 80.00th=[ 523], 90.00th=[ 619], 95.00th=[ 1319], 00:29:42.085 | 99.00th=[ 2671], 99.50th=[ 2671], 99.90th=[ 2671], 99.95th=[ 2671], 00:29:42.085 | 99.99th=[ 2671] 00:29:42.085 bw ( KiB/s): min= 184, max= 224, per=100.00%, avg=204.00, stdev=28.28, samples=2 00:29:42.085 iops : min= 46, max= 56, avg=51.00, stdev= 7.07, samples=2 00:29:42.085 write: IOPS=56, BW=227KiB/s (232kB/s)(228KiB/1005msec); 0 zone resets 00:29:42.085 slat (nsec): min=22438, max=76629, avg=38971.40, stdev=12187.31 00:29:42.085 clat (usec): min=5491, max=40868, avg=17086.88, stdev=5024.22 00:29:42.085 lat (usec): min=5525, max=40892, avg=17125.85, stdev=5023.54 00:29:42.085 clat percentiles (usec): 00:29:42.085 | 1.00th=[ 5473], 5.00th=[ 6390], 10.00th=[12256], 20.00th=[15008], 00:29:42.085 | 30.00th=[16581], 40.00th=[17171], 50.00th=[17433], 60.00th=[18220], 00:29:42.085 | 70.00th=[18482], 80.00th=[19006], 90.00th=[20579], 95.00th=[23462], 00:29:42.085 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:29:42.085 | 99.99th=[40633] 00:29:42.085 bw ( KiB/s): min= 200, max= 248, per=98.74%, avg=224.00, stdev=33.94, samples=2 00:29:42.085 iops : min= 50, 
max= 62, avg=56.00, stdev= 8.49, samples=2 00:29:42.085 lat (usec) : 250=6.48%, 500=28.70%, 750=8.33%, 1000=0.93% 00:29:42.085 lat (msec) : 2=1.85%, 4=0.93%, 10=3.70%, 20=42.59%, 50=6.48% 00:29:42.085 cpu : usr=0.20%, sys=0.30%, ctx=108, majf=0, minf=1 00:29:42.085 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:42.085 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.085 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:42.085 issued rwts: total=51,57,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:42.085 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:42.085 00:29:42.085 Run status group 0 (all jobs): 00:29:42.085 READ: bw=203KiB/s (208kB/s), 203KiB/s-203KiB/s (208kB/s-208kB/s), io=204KiB (209kB), run=1005-1005msec 00:29:42.085 WRITE: bw=227KiB/s (232kB/s), 227KiB/s-227KiB/s (232kB/s-232kB/s), io=228KiB (233kB), run=1005-1005msec 00:29:42.085 00:29:42.085 Disk stats (read/write): 00:29:42.085 sda: ios=89/48, merge=0/0, ticks=36/858, in_queue=895, util=90.96% 00:29:42.085 02:24:50 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p iscsi -i 131072 -d 32 -t randrw -r 1 -v 00:29:42.085 [global] 00:29:42.085 thread=1 00:29:42.085 invalidate=1 00:29:42.085 rw=randrw 00:29:42.085 time_based=1 00:29:42.085 runtime=1 00:29:42.085 ioengine=libaio 00:29:42.085 direct=1 00:29:42.085 bs=131072 00:29:42.085 iodepth=32 00:29:42.085 norandommap=0 00:29:42.085 numjobs=1 00:29:42.085 00:29:42.085 verify_dump=1 00:29:42.085 verify_backlog=512 00:29:42.085 verify_state_save=0 00:29:42.085 do_verify=1 00:29:42.085 verify=crc32c-intel 00:29:42.085 [job0] 00:29:42.085 filename=/dev/sda 00:29:42.085 queue_depth set to 113 (sda) 00:29:42.085 job0: (g=0): rw=randrw, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=libaio, iodepth=32 00:29:42.085 fio-3.35 00:29:42.085 Starting 1 thread 00:29:42.085 [2024-07-23 
02:24:50.645570] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:43.989 [2024-07-23 02:24:52.449904] scsi_bdev.c: 616:bdev_scsi_inquiry: *NOTICE*: unsupported INQUIRY VPD page 0xb9 00:29:43.989 00:29:43.989 job0: (groupid=0, jobs=1): err= 0: pid=123085: Tue Jul 23 02:24:52 2024 00:29:43.989 read: IOPS=74, BW=9549KiB/s (9778kB/s)(15.8MiB/1689msec) 00:29:43.989 slat (usec): min=10, max=121, avg=32.73, stdev=17.76 00:29:43.989 clat (usec): min=426, max=118249, avg=2641.59, stdev=10469.74 00:29:43.989 lat (usec): min=452, max=118270, avg=2674.32, stdev=10468.04 00:29:43.989 clat percentiles (usec): 00:29:43.989 | 1.00th=[ 437], 5.00th=[ 469], 10.00th=[ 498], 20.00th=[ 553], 00:29:43.989 | 30.00th=[ 586], 40.00th=[ 1004], 50.00th=[ 1319], 60.00th=[ 1532], 00:29:43.989 | 70.00th=[ 2212], 80.00th=[ 3097], 90.00th=[ 4080], 95.00th=[ 4293], 00:29:43.989 | 99.00th=[ 5866], 99.50th=[117965], 99.90th=[117965], 99.95th=[117965], 00:29:43.989 | 99.99th=[117965] 00:29:43.989 bw ( KiB/s): min= 8960, max=23296, per=100.00%, avg=16128.00, stdev=10137.08, samples=2 00:29:43.989 iops : min= 70, max= 182, avg=126.00, stdev=79.20, samples=2 00:29:43.989 write: IOPS=68, BW=8791KiB/s (9002kB/s)(14.5MiB/1689msec); 0 zone resets 00:29:43.989 slat (usec): min=64, max=603, avg=105.93, stdev=56.21 00:29:43.989 clat (msec): min=17, max=1402, avg=456.79, stdev=422.56 00:29:43.989 lat (msec): min=17, max=1403, avg=456.90, stdev=422.57 00:29:43.989 clat percentiles (msec): 00:29:43.989 | 1.00th=[ 22], 5.00th=[ 41], 10.00th=[ 66], 20.00th=[ 117], 00:29:43.989 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 171], 60.00th=[ 435], 00:29:43.989 | 70.00th=[ 667], 80.00th=[ 885], 90.00th=[ 1183], 95.00th=[ 1318], 00:29:43.989 | 99.00th=[ 1401], 99.50th=[ 1401], 99.90th=[ 1401], 99.95th=[ 1401], 00:29:43.989 | 99.99th=[ 1401] 00:29:43.989 bw ( KiB/s): min= 256, max=16128, per=82.50%, avg=7253.33, stdev=8100.83, samples=3 00:29:43.989 iops : min= 2, max= 126, avg=56.67, 
stdev=63.29, samples=3 00:29:43.989 lat (usec) : 500=5.37%, 750=14.46%, 1000=1.24% 00:29:43.989 lat (msec) : 2=14.46%, 4=10.74%, 10=5.37%, 20=0.41%, 50=2.89% 00:29:43.989 lat (msec) : 100=4.96%, 250=16.53%, 500=5.79%, 750=5.79%, 1000=4.55% 00:29:43.989 lat (msec) : 2000=7.44% 00:29:43.989 cpu : usr=0.59%, sys=0.47%, ctx=216, majf=0, minf=1 00:29:43.989 IO depths : 1=0.4%, 2=0.8%, 4=1.7%, 8=3.3%, 16=6.6%, 32=87.2%, >=64=0.0% 00:29:43.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:43.989 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.5%, 64=0.0%, >=64=0.0% 00:29:43.989 issued rwts: total=126,116,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:43.989 latency : target=0, window=0, percentile=100.00%, depth=32 00:29:43.989 00:29:43.989 Run status group 0 (all jobs): 00:29:43.989 READ: bw=9549KiB/s (9778kB/s), 9549KiB/s-9549KiB/s (9778kB/s-9778kB/s), io=15.8MiB (16.5MB), run=1689-1689msec 00:29:43.989 WRITE: bw=8791KiB/s (9002kB/s), 8791KiB/s-8791KiB/s (9002kB/s-9002kB/s), io=14.5MiB (15.2MB), run=1689-1689msec 00:29:43.989 00:29:43.989 Disk stats (read/write): 00:29:43.989 sda: ios=174/113, merge=0/0, ticks=346/41048, in_queue=41393, util=94.82% 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@65 -- # rm -f ./local-job0-0-verify.state 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@67 -- # trap - SIGINT SIGTERM EXIT 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@69 -- # iscsicleanup 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@980 -- # echo 'Cleaning up iSCSI connection' 00:29:43.989 Cleaning up iSCSI connection 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@981 -- # iscsiadm -m node --logout 00:29:43.989 Logging out of session [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] 00:29:43.989 Logout of [sid: 73, target: iqn.2016-06.io.spdk:Target3, portal: 10.0.0.1,3260] successful. 
00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@982 -- # iscsiadm -m node -o delete 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@983 -- # rm -rf 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@70 -- # rpc_cmd bdev_rbd_delete Ceph0 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:43.989 [2024-07-23 02:24:52.558277] lun.c: 398:bdev_event_cb: *NOTICE*: bdev name (Ceph0) received event(SPDK_BDEV_EVENT_REMOVE) 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@71 -- # rpc_cmd bdev_rbd_unregister_cluster iscsi_rbd_cluster 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@72 -- # killprocess 122920 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@948 -- # '[' -z 122920 ']' 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@952 -- # kill -0 122920 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # uname 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 122920 00:29:43.989 killing process with pid 122920 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 122920' 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@967 -- # kill 122920 00:29:43.989 02:24:52 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@972 -- # wait 122920 00:29:45.892 02:24:54 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@73 -- # rbd_cleanup 00:29:45.892 02:24:54 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:29:45.892 02:24:54 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:29:46.151 + base_dir=/var/tmp/ceph 00:29:46.151 + image=/var/tmp/ceph/ceph_raw.img 00:29:46.151 + dev=/dev/loop200 00:29:46.151 + pkill -9 ceph 00:29:46.151 + sleep 3 00:29:49.440 + umount /dev/loop200p2 00:29:49.440 umount: /dev/loop200p2: not mounted. 00:29:49.440 + losetup -d /dev/loop200 00:29:49.440 + rm -rf /var/tmp/ceph 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_rbd -- rbd/rbd.sh@75 -- # iscsitestfini 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_rbd -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:29:49.440 00:29:49.440 real 0m30.280s 00:29:49.440 user 0m32.450s 00:29:49.440 sys 0m2.171s 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.440 ************************************ 00:29:49.440 END TEST iscsi_tgt_rbd 00:29:49.440 ************************************ 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_rbd -- common/autotest_common.sh@10 -- # set +x 00:29:49.440 02:24:57 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:29:49.440 02:24:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@57 -- # trap 'cleanup_veth_interfaces; exit 1' SIGINT SIGTERM EXIT 00:29:49.440 02:24:57 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@59 -- # '[' 1 -eq 1 ']' 00:29:49.440 02:24:57 iscsi_tgt -- 
iscsi_tgt/iscsi_tgt.sh@60 -- # run_test iscsi_tgt_initiator /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:29:49.440 02:24:57 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:49.440 02:24:57 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.440 02:24:57 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:29:49.440 ************************************ 00:29:49.440 START TEST iscsi_tgt_initiator 00:29:49.440 ************************************ 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator/initiator.sh 00:29:49.440 * Looking for test storage... 00:29:49.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/initiator 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:29:49.440 02:24:57 
iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@11 -- # iscsitestinit 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@16 -- # timing_enter start_iscsi_tgt 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@19 -- # pid=123234 00:29:49.440 iSCSI target launched. 
pid: 123234 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@20 -- # echo 'iSCSI target launched. pid: 123234' 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@21 -- # trap 'killprocess $pid;exit 1' SIGINT SIGTERM EXIT 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@22 -- # waitforlisten 123234 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@829 -- # '[' -z 123234 ']' 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.440 02:24:57 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:49.440 [2024-07-23 02:24:58.045187] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:29:49.440 [2024-07-23 02:24:58.045399] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123234 ] 00:29:49.706 [2024-07-23 02:24:58.406279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.970 [2024-07-23 02:24:58.598701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@862 -- # return 0 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@24 -- # rpc_cmd framework_start_init 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.228 02:24:58 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.796 iscsi_tgt is listening. Running tests... 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@25 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@27 -- # timing_exit start_iscsi_tgt 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@29 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:50.796 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:50.797 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@30 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:29:50.797 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:50.797 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:51.056 Malloc0 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@36 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # 
set +x 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:51.056 02:24:59 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@37 -- # sleep 1 00:29:51.992 02:25:00 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@38 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:29:51.992 02:25:00 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 5 -s 512 00:29:51.992 02:25:00 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@40 -- # initiator_json_config 00:29:51.992 02:25:00 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:29:52.251 [2024-07-23 02:25:00.797611] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:52.251 [2024-07-23 02:25:00.797851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123284 ] 00:29:52.510 [2024-07-23 02:25:01.143737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.768 [2024-07-23 02:25:01.392513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.027 Running I/O for 5 seconds... 
00:29:58.300 00:29:58.300 Latency(us) 00:29:58.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.300 Job: iSCSI0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:58.300 Verification LBA range: start 0x0 length 0x4000 00:29:58.300 iSCSI0 : 5.01 16896.90 66.00 0.00 0.00 7547.42 1489.45 5302.46 00:29:58.300 =================================================================================================================== 00:29:58.300 Total : 16896.90 66.00 0.00 0.00 7547.42 1489.45 5302.46 00:29:59.237 02:25:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # initiator_json_config 00:29:59.237 02:25:07 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 5 -s 512 00:29:59.237 02:25:07 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:29:59.496 [2024-07-23 02:25:08.027234] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:29:59.496 [2024-07-23 02:25:08.027400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123370 ] 00:29:59.755 [2024-07-23 02:25:08.317273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.755 [2024-07-23 02:25:08.520564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.013 Running I/O for 5 seconds... 
00:30:05.284 00:30:05.284 Latency(us) 00:30:05.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.284 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:30:05.284 iSCSI0 : 5.00 31234.35 122.01 0.00 0.00 4093.79 1050.07 8936.73 00:30:05.284 =================================================================================================================== 00:30:05.284 Total : 31234.35 122.01 0.00 0.00 4093.79 1050.07 8936.73 00:30:06.221 02:25:14 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 5 -s 512 00:30:06.221 02:25:14 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@42 -- # initiator_json_config 00:30:06.221 02:25:14 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:30:06.480 [2024-07-23 02:25:15.066121] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:06.480 [2024-07-23 02:25:15.066320] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123446 ] 00:30:06.739 [2024-07-23 02:25:15.363523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.997 [2024-07-23 02:25:15.551006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.997 Running I/O for 5 seconds... 
00:30:12.269 00:30:12.269 Latency(us) 00:30:12.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:12.269 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:30:12.269 iSCSI0 : 5.00 56492.35 220.67 0.00 0.00 2262.91 848.99 2844.86 00:30:12.269 =================================================================================================================== 00:30:12.269 Total : 56492.35 220.67 0.00 0.00 2262.91 848.99 2844.86 00:30:13.220 02:25:21 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w reset -t 10 -s 512 00:30:13.220 02:25:21 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@43 -- # initiator_json_config 00:30:13.220 02:25:21 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@139 -- # jq . 00:30:13.490 [2024-07-23 02:25:22.098295] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:13.490 [2024-07-23 02:25:22.098517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 512 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123519 ] 00:30:13.749 [2024-07-23 02:25:22.398600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.007 [2024-07-23 02:25:22.577632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.265 Running I/O for 10 seconds... 
00:30:24.241 00:30:24.241 Latency(us) 00:30:24.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.241 Job: iSCSI0 (Core Mask 0x1, workload: reset, depth: 128, IO size: 4096) 00:30:24.241 Verification LBA range: start 0x0 length 0x4000 00:30:24.241 iSCSI0 : 10.01 17087.60 66.75 0.00 0.00 7463.83 1467.11 5838.66 00:30:24.241 =================================================================================================================== 00:30:24.241 Total : 17087.60 66.75 0.00 0.00 7463.83 1467.11 5838.66 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@47 -- # killprocess 123234 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@948 -- # '[' -z 123234 ']' 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@952 -- # kill -0 123234 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # uname 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123234 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:25.619 killing process with pid 123234 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123234' 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@967 -- # kill 123234 00:30:25.619 02:25:34 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@972 -- # wait 123234 00:30:27.524 02:25:36 iscsi_tgt.iscsi_tgt_initiator -- initiator/initiator.sh@49 -- # 
iscsitestfini 00:30:27.524 02:25:36 iscsi_tgt.iscsi_tgt_initiator -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:30:27.524 00:30:27.524 real 0m38.362s 00:30:27.524 user 0m54.144s 00:30:27.524 sys 0m13.058s 00:30:27.524 02:25:36 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.524 02:25:36 iscsi_tgt.iscsi_tgt_initiator -- common/autotest_common.sh@10 -- # set +x 00:30:27.524 ************************************ 00:30:27.524 END TEST iscsi_tgt_initiator 00:30:27.524 ************************************ 00:30:27.524 02:25:36 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:30:27.525 02:25:36 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@61 -- # run_test iscsi_tgt_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:30:27.525 02:25:36 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:27.525 02:25:36 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.525 02:25:36 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:30:27.525 ************************************ 00:30:27.525 START TEST iscsi_tgt_bdev_io_wait 00:30:27.525 ************************************ 00:30:27.525 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait/bdev_io_wait.sh 00:30:27.784 * Looking for test storage... 
00:30:27.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/bdev_io_wait 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@26 
-- # INITIATOR_NAME=ANY 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@11 -- # iscsitestinit 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@16 -- # timing_enter start_iscsi_tgt 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@19 -- # pid=123710 00:30:27.784 iSCSI target launched. pid: 123710 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@20 -- # echo 'iSCSI target launched. 
pid: 123710' 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@21 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@22 -- # waitforlisten 123710 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@18 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 123710 ']' 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:27.784 02:25:36 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:27.784 [2024-07-23 02:25:36.493299] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:27.784 [2024-07-23 02:25:36.493573] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123710 ] 00:30:28.353 [2024-07-23 02:25:36.853078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.353 [2024-07-23 02:25:37.046795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@23 -- # rpc_cmd iscsi_set_options -o 30 -a 4 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@25 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@26 -- # rpc_cmd framework_start_init 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:28.612 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 02:25:37 
iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 iscsi_tgt is listening. Running tests... 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@27 -- # echo 'iscsi_tgt is listening. Running tests...' 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@29 -- # timing_exit start_iscsi_tgt 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@31 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@32 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@33 -- # rpc_cmd bdev_malloc_create 64 512 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 Malloc0 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@38 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Malloc0:0 1:2 256 -d 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.181 02:25:37 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@39 -- # sleep 1 00:30:30.558 02:25:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@40 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.558 02:25:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w write -t 1 00:30:30.558 02:25:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@42 -- # initiator_json_config 00:30:30.558 02:25:38 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:30:30.558 [2024-07-23 02:25:39.054792] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:30.558 [2024-07-23 02:25:39.055016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123755 ] 00:30:30.558 [2024-07-23 02:25:39.216721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.817 [2024-07-23 02:25:39.475300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.076 Running I/O for 1 seconds... 
00:30:32.450 00:30:32.450 Latency(us) 00:30:32.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.450 Job: iSCSI0 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:32.450 iSCSI0 : 1.00 24905.94 97.29 0.00 0.00 5123.23 1638.40 7268.54 00:30:32.450 =================================================================================================================== 00:30:32.450 Total : 24905.94 97.29 0.00 0.00 5123.23 1638.40 7268.54 00:30:33.384 02:25:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w read -t 1 00:30:33.384 02:25:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@43 -- # initiator_json_config 00:30:33.384 02:25:41 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:30:33.384 [2024-07-23 02:25:41.950069] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:33.384 [2024-07-23 02:25:41.950264] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123788 ] 00:30:33.384 [2024-07-23 02:25:42.106096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.642 [2024-07-23 02:25:42.299938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.901 Running I/O for 1 seconds... 
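The MiB/s column in these bdevperf latency tables follows directly from the IOPS column and the 4 KiB IO size (`-o 4096` on the bdevperf command line). A standalone sanity check of the write-run row above, not part of the test scripts themselves:

```shell
#!/bin/sh
# Sanity-check bdevperf's reported throughput: MiB/s = IOPS * io_size / 2^20.
# The figures below are copied from the write-run latency table in this log.
iops=24905.94
io_size=4096   # -o 4096 on the bdevperf command line
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "computed: $mibps MiB/s (log reports 97.29)"
```

The same arithmetic reproduces the MiB/s figure of every other table in this log.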
00:30:34.834 00:30:34.834 Latency(us) 00:30:34.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.834 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 128, IO size: 4096) 00:30:34.834 iSCSI0 : 1.00 27929.57 109.10 0.00 0.00 4569.95 1206.46 5451.40 00:30:34.834 =================================================================================================================== 00:30:34.834 Total : 27929.57 109.10 0.00 0.00 4569.95 1206.46 5451.40 00:30:36.210 02:25:44 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w flush -t 1 00:30:36.210 02:25:44 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@44 -- # initiator_json_config 00:30:36.210 02:25:44 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:30:36.210 [2024-07-23 02:25:44.721980] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:36.210 [2024-07-23 02:25:44.722244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123821 ] 00:30:36.210 [2024-07-23 02:25:44.870602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.470 [2024-07-23 02:25:45.075999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.728 Running I/O for 1 seconds... 
00:30:37.664 00:30:37.664 Latency(us) 00:30:37.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:37.664 Job: iSCSI0 (Core Mask 0x1, workload: flush, depth: 128, IO size: 4096) 00:30:37.664 iSCSI0 : 1.00 35114.20 137.16 0.00 0.00 3637.00 1042.62 4230.05 00:30:37.664 =================================================================================================================== 00:30:37.664 Total : 35114.20 137.16 0.00 0.00 3637.00 1042.62 4230.05 00:30:38.599 02:25:47 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w unmap -t 1 00:30:38.599 02:25:47 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@45 -- # initiator_json_config 00:30:38.599 02:25:47 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@139 -- # jq . 00:30:38.858 [2024-07-23 02:25:47.511046] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:38.858 [2024-07-23 02:25:47.511323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123854 ] 00:30:39.117 [2024-07-23 02:25:47.679478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.117 [2024-07-23 02:25:47.870773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:39.685 Running I/O for 1 seconds... 
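The Average latency column is also internally consistent with the IOPS column via Little's law: mean latency is approximately queue depth divided by IOPS. The estimate below is approximate (the actual runtime is not exactly 1 s), so it lands near, not exactly on, the reported value:

```shell
#!/bin/sh
# Little's law sanity check: mean latency ~= queue_depth / IOPS.
# Numbers are taken from the flush-run table above; -q 128 is the
# bdevperf queue depth used throughout these runs.
depth=128
iops=35114.20
avg_us=$(awk -v d="$depth" -v i="$iops" 'BEGIN { printf "%.0f", d / i * 1e6 }')
echo "estimated average latency: ${avg_us} us (log reports 3637.00 us)"
```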
00:30:40.622 00:30:40.622 Latency(us) 00:30:40.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:40.622 Job: iSCSI0 (Core Mask 0x1, workload: unmap, depth: 128, IO size: 4096) 00:30:40.622 iSCSI0 : 1.00 20018.01 78.20 0.00 0.00 6376.44 1005.38 7923.90 00:30:40.622 =================================================================================================================== 00:30:40.622 Total : 20018.01 78.20 0.00 0.00 6376.44 1005.38 7923.90 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@47 -- # trap - SIGINT SIGTERM EXIT 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@49 -- # killprocess 123710 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 123710 ']' 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 123710 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123710 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:41.559 killing process with pid 123710 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123710' 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 123710 00:30:41.559 02:25:50 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 123710 00:30:43.463 02:25:52 iscsi_tgt.iscsi_tgt_bdev_io_wait -- bdev_io_wait/bdev_io_wait.sh@51 -- # 
iscsitestfini 00:30:43.463 02:25:52 iscsi_tgt.iscsi_tgt_bdev_io_wait -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:30:43.463 00:30:43.463 real 0m15.922s 00:30:43.463 user 0m22.774s 00:30:43.463 sys 0m3.694s 00:30:43.463 02:25:52 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.463 02:25:52 iscsi_tgt.iscsi_tgt_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:43.463 ************************************ 00:30:43.463 END TEST iscsi_tgt_bdev_io_wait 00:30:43.463 ************************************ 00:30:43.463 02:25:52 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:30:43.463 02:25:52 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@62 -- # run_test iscsi_tgt_resize /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:30:43.463 02:25:52 iscsi_tgt -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:43.463 02:25:52 iscsi_tgt -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.463 02:25:52 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:30:43.463 ************************************ 00:30:43.463 START TEST iscsi_tgt_resize 00:30:43.463 ************************************ 00:30:43.463 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize/resize.sh 00:30:43.722 * Looking for test storage... 
00:30:43.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/resize 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:30:43.722 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 
00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@12 -- # iscsitestinit 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@109 -- # '[' '' == iso ']' 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@14 -- # BDEV_SIZE=64 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@15 -- # BDEV_NEW_SIZE=128 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@16 -- # BLOCK_SIZE=512 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@17 -- # RESIZE_SOCK=/var/tmp/spdk-resize.sock 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@19 -- # timing_enter start_iscsi_tgt 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@22 -- # rm -f /var/tmp/spdk-resize.sock 00:30:43.723 iSCSI target launched. pid: 123956 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@25 -- # pid=123956 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@26 -- # echo 'iSCSI target launched. 
pid: 123956' 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@24 -- # ip netns exec spdk_iscsi_ns /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x2 -p 1 -s 512 --wait-for-rpc 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@27 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@28 -- # waitforlisten 123956 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 123956 ']' 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.723 02:25:52 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:43.723 [2024-07-23 02:25:52.454368] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:43.723 [2024-07-23 02:25:52.454617] [ DPDK EAL parameters: iscsi --no-shconf -c 0x2 -m 512 --main-lcore=1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123956 ] 00:30:44.291 [2024-07-23 02:25:52.822218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.291 [2024-07-23 02:25:53.066766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.550 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:44.550 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:30:44.550 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@29 -- # rpc_cmd framework_start_init 00:30:44.550 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:44.550 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:45.118 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.118 iscsi_tgt is listening. Running tests... 00:30:45.118 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@30 -- # echo 'iscsi_tgt is listening. Running tests...' 
00:30:45.118 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@32 -- # timing_exit start_iscsi_tgt 00:30:45.118 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.118 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@34 -- # rpc_cmd iscsi_create_portal_group 1 10.0.0.1:3260 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@35 -- # rpc_cmd iscsi_create_initiator_group 2 ANY 10.0.0.2/32 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@36 -- # rpc_cmd bdev_null_create Null0 64 512 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:45.377 Null0 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@41 -- # rpc_cmd iscsi_create_target_node disk1 disk1_alias Null0:0 1:2 256 -d 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:45.377 02:25:53 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@42 -- # sleep 1 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@43 -- # trap 'killprocess $pid; iscsitestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@47 -- # bdevperf_pid=123999 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@48 -- # waitforlisten 123999 /var/tmp/spdk-resize.sock 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@829 -- # '[' -z 123999 ']' 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-resize.sock 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:46.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock... 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-resize.sock...' 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # initiator_json_config 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-resize.sock --json /dev/fd/63 -q 16 -o 4096 -w read -t 5 -R -s 128 -z 00:30:46.313 02:25:54 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@139 -- # jq . 00:30:46.573 [2024-07-23 02:25:55.098221] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:30:46.573 [2024-07-23 02:25:55.098435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 -m 128 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123999 ] 00:30:46.573 [2024-07-23 02:25:55.298665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.832 [2024-07-23 02:25:55.487977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@862 -- # return 0 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@50 -- # rpc_cmd bdev_null_resize Null0 128 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:47.400 [2024-07-23 02:25:55.952047] lun.c: 402:bdev_event_cb: *NOTICE*: bdev name (Null0) received event(SPDK_BDEV_EVENT_RESIZE) 00:30:47.400 true 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # jq '.[].num_blocks' 00:30:47.400 02:25:55 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:47.400 02:25:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@52 -- # num_block=131072 00:30:47.400 02:25:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@54 -- # total_size=64 
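The resize check that follows converts `num_blocks` (as returned by `bdev_get_bdevs`) back to MiB using the 512-byte `BLOCK_SIZE` set at the top of the test. A minimal sketch of that arithmetic, using the two values this log reports before and after `bdev_null_resize`:

```shell
#!/bin/sh
# Convert a bdev's num_blocks to a size in MiB, as resize.sh's checks do.
# block_size=512 matches BLOCK_SIZE in the test; 131072 and 262144 are the
# num_blocks values this log reports before and after bdev_null_resize,
# i.e. the original 64 MiB bdev and its resized 128 MiB form.
block_size=512
for blocks in 131072 262144; do
  echo "$blocks blocks -> $(( blocks * block_size / 1024 / 1024 )) MiB"
done
```

This is why the test's `'[' 64 '!=' 64 ']'` and `'[' 128 '!=' 128 ']'` comparisons both pass.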
00:30:47.400 02:25:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@55 -- # '[' 64 '!=' 64 ']' 00:30:47.400 02:25:56 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@59 -- # sleep 2 00:30:49.304 02:25:58 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@61 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-resize.sock perform_tests 00:30:49.563 Running I/O for 5 seconds... 00:30:54.832 00:30:54.832 Latency(us) 00:30:54.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.832 Job: iSCSI0 (Core Mask 0x1, workload: read, depth: 16, IO size: 4096) 00:30:54.832 iSCSI0 : 5.00 36398.97 142.18 0.00 0.00 436.35 240.17 1124.54 00:30:54.832 =================================================================================================================== 00:30:54.833 Total : 36398.97 142.18 0.00 0.00 436.35 240.17 1124.54 00:30:54.833 0 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # jq '.[].num_blocks' 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # rpc_cmd -s /var/tmp/spdk-resize.sock bdev_get_bdevs 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@63 -- # num_block=262144 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@65 -- # total_size=128 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@66 -- # '[' 128 '!=' 128 ']' 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@72 -- # killprocess 123999 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 
123999 ']' 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 123999 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 123999 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:54.833 killing process with pid 123999 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123999' 00:30:54.833 Received shutdown signal, test time was about 5.000000 seconds 00:30:54.833 00:30:54.833 Latency(us) 00:30:54.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.833 =================================================================================================================== 00:30:54.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 123999 00:30:54.833 02:26:03 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 123999 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@73 -- # killprocess 123956 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@948 -- # '[' -z 123956 ']' 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@952 -- # kill -0 123956 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # uname 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # 
ps --no-headers -o comm= 123956 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:55.808 killing process with pid 123956 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@966 -- # echo 'killing process with pid 123956' 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@967 -- # kill 123956 00:30:55.808 02:26:04 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@972 -- # wait 123956 00:30:57.715 02:26:06 iscsi_tgt.iscsi_tgt_resize -- resize/resize.sh@75 -- # iscsitestfini 00:30:57.715 02:26:06 iscsi_tgt.iscsi_tgt_resize -- iscsi_tgt/common.sh@131 -- # '[' '' == iso ']' 00:30:57.715 00:30:57.715 real 0m14.059s 00:30:57.715 user 0m19.332s 00:30:57.715 sys 0m3.867s 00:30:57.716 02:26:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:57.716 02:26:06 iscsi_tgt.iscsi_tgt_resize -- common/autotest_common.sh@10 -- # set +x 00:30:57.716 ************************************ 00:30:57.716 END TEST iscsi_tgt_resize 00:30:57.716 ************************************ 00:30:57.716 02:26:06 iscsi_tgt -- common/autotest_common.sh@1142 -- # return 0 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@65 -- # cleanup_veth_interfaces 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@95 -- # ip link set init_br nomaster 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@96 -- # ip link set tgt_br nomaster 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@97 -- # ip link set tgt_br2 nomaster 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@98 -- # ip link set init_br down 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@99 -- # ip link set tgt_br down 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@100 -- # ip link set tgt_br2 down 00:30:57.716 02:26:06 iscsi_tgt -- 
iscsi_tgt/common.sh@101 -- # ip link delete iscsi_br type bridge 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@102 -- # ip link delete spdk_init_int 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@103 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@104 -- # ip netns exec spdk_iscsi_ns ip link delete spdk_tgt_int2 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/common.sh@105 -- # ip netns del spdk_iscsi_ns 00:30:57.716 02:26:06 iscsi_tgt -- iscsi_tgt/iscsi_tgt.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:30:57.716 00:30:57.716 real 23m40.807s 00:30:57.716 user 40m50.863s 00:30:57.716 sys 7m32.591s 00:30:57.716 02:26:06 iscsi_tgt -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:57.716 02:26:06 iscsi_tgt -- common/autotest_common.sh@10 -- # set +x 00:30:57.716 ************************************ 00:30:57.716 END TEST iscsi_tgt 00:30:57.716 ************************************ 00:30:57.975 02:26:06 -- common/autotest_common.sh@1142 -- # return 0 00:30:57.975 02:26:06 -- spdk/autotest.sh@264 -- # run_test spdkcli_iscsi /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:30:57.975 02:26:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:57.975 02:26:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:57.975 02:26:06 -- common/autotest_common.sh@10 -- # set +x 00:30:57.975 ************************************ 00:30:57.975 START TEST spdkcli_iscsi 00:30:57.975 ************************************ 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/iscsi.sh 00:30:57.975 * Looking for test storage... 
00:30:57.975 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 
00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:30:57.975 02:26:06 spdkcli_iscsi -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@12 -- # MATCH_FILE=spdkcli_iscsi.test 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@13 -- # SPDKCLI_BRANCH=/iscsi 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@15 -- # trap cleanup EXIT 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@17 -- # timing_enter run_iscsi_tgt 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@21 -- # iscsi_tgt_pid=124247 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@22 -- # waitforlisten 124247 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@829 -- # '[' -z 124247 ']' 00:30:57.975 02:26:06 spdkcli_iscsi -- spdkcli/iscsi.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/iscsi_tgt -m 0x3 -p 0 --wait-for-rpc 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:57.975 02:26:06 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:30:58.234 [2024-07-23 02:26:06.777776] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:30:58.234 [2024-07-23 02:26:06.777976] [ DPDK EAL parameters: iscsi --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124247 ] 00:30:58.234 [2024-07-23 02:26:06.954472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:58.492 [2024-07-23 02:26:07.221354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.492 [2024-07-23 02:26:07.221369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.060 02:26:07 spdkcli_iscsi -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:59.060 02:26:07 spdkcli_iscsi -- common/autotest_common.sh@862 -- # return 0 00:30:59.060 02:26:07 spdkcli_iscsi -- spdkcli/iscsi.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:30:59.996 02:26:08 spdkcli_iscsi -- spdkcli/iscsi.sh@25 -- # timing_exit run_iscsi_tgt 00:30:59.996 02:26:08 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:59.997 02:26:08 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:30:59.997 02:26:08 spdkcli_iscsi -- spdkcli/iscsi.sh@27 -- # timing_enter spdkcli_create_iscsi_config 00:30:59.997 02:26:08 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.997 02:26:08 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:30:59.997 02:26:08 spdkcli_iscsi -- spdkcli/iscsi.sh@48 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc0'\'' '\''Malloc0'\'' True 00:30:59.997 '\''/bdevs/malloc create 32 512 Malloc1'\'' 
'\''Malloc1'\'' True 00:30:59.997 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:59.997 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:59.997 '\''/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"'\'' '\''host=127.0.0.1, port=3261'\'' True 00:30:59.997 '\''/iscsi/portal_groups create 2 127.0.0.1:3262'\'' '\''host=127.0.0.1, port=3262'\'' True 00:30:59.997 '\''/iscsi/initiator_groups create 2 ANY 10.0.2.15/32'\'' '\''hostname=ANY, netmask=10.0.2.15/32'\'' True 00:30:59.997 '\''/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32'\'' '\''hostname=ANZ, netmask=10.0.2.15/32'\'' True 00:30:59.997 '\''/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32'\'' '\''hostname=ANW, netmask=10.0.2.16'\'' True 00:30:59.997 '\''/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1'\'' '\''Target0'\'' True 00:30:59.997 '\''/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1'\'' '\''Target1'\'' True 00:30:59.997 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' True 00:30:59.997 '\''/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2'\'' '\''Malloc3'\'' True 00:30:59.997 '\''/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"'\'' '\''user=test3'\'' True 00:30:59.997 '\''/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2'\'' '\''user=test2'\'' True 00:30:59.997 '\''/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"'\'' '\''user=test4'\'' True 00:30:59.997 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true'\'' '\''disable_chap: True'\'' True 00:30:59.997 '\''/iscsi/global_params set_auth g=1 d=true r=false'\'' '\''disable_chap: True'\'' True 00:30:59.997 
'\''/iscsi ls'\'' '\''Malloc'\'' True 00:30:59.997 ' 00:31:08.112 Executing command: ['/bdevs/malloc create 32 512 Malloc0', 'Malloc0', True] 00:31:08.112 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:08.112 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:08.112 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:08.112 Executing command: ['/iscsi/portal_groups create 1 "127.0.0.1:3261 127.0.0.1:3263@0x1"', 'host=127.0.0.1, port=3261', True] 00:31:08.112 Executing command: ['/iscsi/portal_groups create 2 127.0.0.1:3262', 'host=127.0.0.1, port=3262', True] 00:31:08.112 Executing command: ['/iscsi/initiator_groups create 2 ANY 10.0.2.15/32', 'hostname=ANY, netmask=10.0.2.15/32', True] 00:31:08.112 Executing command: ['/iscsi/initiator_groups create 3 ANZ 10.0.2.15/32', 'hostname=ANZ, netmask=10.0.2.15/32', True] 00:31:08.112 Executing command: ['/iscsi/initiator_groups add_initiator 2 ANW 10.0.2.16/32', 'hostname=ANW, netmask=10.0.2.16', True] 00:31:08.112 Executing command: ['/iscsi/target_nodes create Target0 Target0_alias "Malloc0:0 Malloc1:1" 1:2 64 g=1', 'Target0', True] 00:31:08.112 Executing command: ['/iscsi/target_nodes create Target1 Target1_alias Malloc2:0 1:2 64 g=1', 'Target1', True] 00:31:08.112 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_add_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', True] 00:31:08.112 Executing command: ['/iscsi/target_nodes add_lun iqn.2016-06.io.spdk:Target1 Malloc3 2', 'Malloc3', True] 00:31:08.112 Executing command: ['/iscsi/auth_groups create 1 "user:test1 secret:test1 muser:mutual_test1 msecret:mutual_test1,user:test3 secret:test3 muser:mutual_test3 msecret:mutual_test3"', 'user=test3', True] 00:31:08.112 Executing command: ['/iscsi/auth_groups add_secret 1 user=test2 secret=test2 muser=mutual_test2 msecret=mutual_test2', 'user=test2', True] 00:31:08.112 Executing command: 
['/iscsi/auth_groups create 2 "user:test4 secret:test4 muser:mutual_test4 msecret:mutual_test4"', 'user=test4', True] 00:31:08.112 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 set_auth g=1 d=true', 'disable_chap: True', True] 00:31:08.112 Executing command: ['/iscsi/global_params set_auth g=1 d=true r=false', 'disable_chap: True', True] 00:31:08.112 Executing command: ['/iscsi ls', 'Malloc', True] 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/iscsi.sh@49 -- # timing_exit spdkcli_create_iscsi_config 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/iscsi.sh@51 -- # timing_enter spdkcli_check_match 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/iscsi.sh@52 -- # check_match 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /iscsi 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test.match 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_iscsi.test 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/iscsi.sh@53 -- # timing_exit spdkcli_check_match 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/iscsi.sh@55 -- # timing_enter spdkcli_clear_iscsi_config 00:31:08.112 02:26:16 spdkcli_iscsi -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:08.112 02:26:16 spdkcli_iscsi -- 
common/autotest_common.sh@10 -- # set +x 00:31:08.112 02:26:16 spdkcli_iscsi -- spdkcli/iscsi.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/iscsi/auth_groups delete_secret 1 test2'\'' '\''user=test2'\'' 00:31:08.112 '\''/iscsi/auth_groups delete_secret_all 1'\'' '\''user=test1'\'' 00:31:08.112 '\''/iscsi/auth_groups delete 1'\'' '\''user=test1'\'' 00:31:08.112 '\''/iscsi/auth_groups delete_all'\'' '\''user=test4'\'' 00:31:08.112 '\''/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"'\'' '\''portal_group1 - initiator_group3'\'' 00:31:08.112 '\''/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1'\'' '\''Target1'\'' 00:31:08.112 '\''/iscsi/target_nodes delete_all'\'' '\''Target0'\'' 00:31:08.112 '\''/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32'\'' '\''ANW'\'' 00:31:08.112 '\''/iscsi/initiator_groups delete 3'\'' '\''ANZ'\'' 00:31:08.112 '\''/iscsi/initiator_groups delete_all'\'' '\''ANY'\'' 00:31:08.112 '\''/iscsi/portal_groups delete 1'\'' '\''127.0.0.1:3261'\'' 00:31:08.112 '\''/iscsi/portal_groups delete_all'\'' '\''127.0.0.1:3262'\'' 00:31:08.112 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:08.112 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:08.112 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:08.112 '\''/bdevs/malloc delete Malloc0'\'' '\''Malloc0'\'' 00:31:08.112 ' 00:31:14.679 Executing command: ['/iscsi/auth_groups delete_secret 1 test2', 'user=test2', False] 00:31:14.679 Executing command: ['/iscsi/auth_groups delete_secret_all 1', 'user=test1', False] 00:31:14.679 Executing command: ['/iscsi/auth_groups delete 1', 'user=test1', False] 00:31:14.679 Executing command: ['/iscsi/auth_groups delete_all', 'user=test4', False] 00:31:14.679 Executing command: ['/iscsi/target_nodes/iqn.2016-06.io.spdk:Target0 iscsi_target_node_remove_pg_ig_maps "1:3 2:2"', 'portal_group1 - initiator_group3', False] 00:31:14.679 Executing command: 
['/iscsi/target_nodes delete iqn.2016-06.io.spdk:Target1', 'Target1', False] 00:31:14.679 Executing command: ['/iscsi/target_nodes delete_all', 'Target0', False] 00:31:14.679 Executing command: ['/iscsi/initiator_groups delete_initiator 2 ANW 10.0.2.16/32', 'ANW', False] 00:31:14.679 Executing command: ['/iscsi/initiator_groups delete 3', 'ANZ', False] 00:31:14.679 Executing command: ['/iscsi/initiator_groups delete_all', 'ANY', False] 00:31:14.679 Executing command: ['/iscsi/portal_groups delete 1', '127.0.0.1:3261', False] 00:31:14.679 Executing command: ['/iscsi/portal_groups delete_all', '127.0.0.1:3262', False] 00:31:14.679 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:14.679 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:14.679 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:14.679 Executing command: ['/bdevs/malloc delete Malloc0', 'Malloc0', False] 00:31:14.679 02:26:23 spdkcli_iscsi -- spdkcli/iscsi.sh@73 -- # timing_exit spdkcli_clear_iscsi_config 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:14.679 02:26:23 spdkcli_iscsi -- spdkcli/iscsi.sh@75 -- # killprocess 124247 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 124247 ']' 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 124247 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@953 -- # uname 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124247 00:31:14.679 killing process with pid 124247 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124247' 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@967 -- # kill 124247 00:31:14.679 02:26:23 spdkcli_iscsi -- common/autotest_common.sh@972 -- # wait 124247 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/iscsi.sh@1 -- # cleanup 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/common.sh@16 -- # '[' -n 124247 ']' 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/common.sh@17 -- # killprocess 124247 00:31:16.580 02:26:25 spdkcli_iscsi -- common/autotest_common.sh@948 -- # '[' -z 124247 ']' 00:31:16.580 02:26:25 spdkcli_iscsi -- common/autotest_common.sh@952 -- # kill -0 124247 00:31:16.580 Process with pid 124247 is not found 00:31:16.580 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (124247) - No such process 00:31:16.580 02:26:25 spdkcli_iscsi -- common/autotest_common.sh@975 -- # echo 'Process with pid 124247 is not found' 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:16.580 02:26:25 spdkcli_iscsi -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_iscsi.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:16.580 ************************************ 00:31:16.580 END TEST spdkcli_iscsi 00:31:16.580 ************************************ 00:31:16.580 00:31:16.580 real 0m18.691s 00:31:16.580 user 0m38.729s 00:31:16.580 sys 0m1.270s 00:31:16.580 02:26:25 spdkcli_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:16.580 02:26:25 spdkcli_iscsi -- common/autotest_common.sh@10 -- # set +x 00:31:16.580 02:26:25 -- common/autotest_common.sh@1142 -- # return 0 00:31:16.580 02:26:25 -- spdk/autotest.sh@267 -- # 
run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:31:16.580 02:26:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:16.580 02:26:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:16.580 02:26:25 -- common/autotest_common.sh@10 -- # set +x 00:31:16.580 ************************************ 00:31:16.580 START TEST spdkcli_raid 00:31:16.580 ************************************ 00:31:16.580 02:26:25 spdkcli_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:31:16.580 * Looking for test storage... 00:31:16.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:16.580 02:26:25 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:16.580 02:26:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:16.580 02:26:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:16.580 02:26:25 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:31:16.580 02:26:25 
spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:31:16.580 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:31:16.581 02:26:25 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:31:16.581 02:26:25 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:31:16.581 02:26:25 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:31:16.581 02:26:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:31:16.581 02:26:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=124556 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 124556 00:31:16.839 02:26:25 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@829 -- # '[' -z 124556 ']' 00:31:16.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:16.839 02:26:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:16.839 [2024-07-23 02:26:25.528407] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:16.839 [2024-07-23 02:26:25.529171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124556 ] 00:31:17.098 [2024-07-23 02:26:25.701045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:17.358 [2024-07-23 02:26:25.911599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.358 [2024-07-23 02:26:25.911607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.928 02:26:26 spdkcli_raid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:17.928 02:26:26 spdkcli_raid -- common/autotest_common.sh@862 -- # return 0 00:31:17.928 02:26:26 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:31:17.928 02:26:26 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:17.928 02:26:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:17.928 02:26:26 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:31:17.928 02:26:26 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:17.928 02:26:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:17.928 02:26:26 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:17.928 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:17.928 ' 00:31:19.830 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:31:19.830 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:31:19.830 02:26:28 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:31:19.830 02:26:28 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:19.830 02:26:28 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:31:19.830 02:26:28 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:31:19.830 02:26:28 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:19.830 02:26:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:19.831 02:26:28 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:31:19.831 ' 00:31:20.767 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:31:20.767 02:26:29 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:31:20.767 02:26:29 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:20.767 02:26:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:20.767 02:26:29 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:31:20.767 02:26:29 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:20.767 02:26:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:20.767 02:26:29 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:31:20.767 02:26:29 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:31:21.360 02:26:30 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:31:21.360 02:26:30 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:31:21.360 02:26:30 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:31:21.360 02:26:30 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:21.361 02:26:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:21.621 02:26:30 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:31:21.621 02:26:30 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:21.621 02:26:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:21.621 02:26:30 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:31:21.621 ' 00:31:22.559 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:31:22.559 02:26:31 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:31:22.559 02:26:31 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:22.559 02:26:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:22.559 02:26:31 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:31:22.559 02:26:31 spdkcli_raid -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:22.559 02:26:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:22.559 02:26:31 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:31:22.559 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:31:22.559 ' 00:31:23.937 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:31:23.937 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:31:24.196 02:26:32 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:24.196 02:26:32 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 124556 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 124556 ']' 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 124556 00:31:24.196 02:26:32 spdkcli_raid -- 
common/autotest_common.sh@953 -- # uname 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124556 00:31:24.196 killing process with pid 124556 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124556' 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@967 -- # kill 124556 00:31:24.196 02:26:32 spdkcli_raid -- common/autotest_common.sh@972 -- # wait 124556 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 124556 ']' 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 124556 00:31:26.100 02:26:34 spdkcli_raid -- common/autotest_common.sh@948 -- # '[' -z 124556 ']' 00:31:26.100 02:26:34 spdkcli_raid -- common/autotest_common.sh@952 -- # kill -0 124556 00:31:26.100 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (124556) - No such process 00:31:26.100 02:26:34 spdkcli_raid -- common/autotest_common.sh@975 -- # echo 'Process with pid 124556 is not found' 00:31:26.100 Process with pid 124556 is not found 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:26.100 02:26:34 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:26.100 ************************************ 00:31:26.100 END TEST 
spdkcli_raid 00:31:26.100 ************************************ 00:31:26.100 00:31:26.100 real 0m9.392s 00:31:26.100 user 0m19.261s 00:31:26.100 sys 0m1.083s 00:31:26.100 02:26:34 spdkcli_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:26.100 02:26:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:31:26.100 02:26:34 -- common/autotest_common.sh@1142 -- # return 0 00:31:26.100 02:26:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@330 -- # '[' 1 -eq 1 ']' 00:31:26.100 02:26:34 -- spdk/autotest.sh@331 -- # run_test blockdev_rbd /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:31:26.100 02:26:34 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:26.100 02:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:26.100 02:26:34 -- common/autotest_common.sh@10 -- # set +x 00:31:26.100 ************************************ 00:31:26.100 START TEST blockdev_rbd 00:31:26.100 ************************************ 00:31:26.100 02:26:34 blockdev_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh rbd 00:31:26.100 * Looking for test storage... 
00:31:26.100 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:26.100 02:26:34 blockdev_rbd -- bdev/nbd_common.sh@6 -- # set -e 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@20 -- # : 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@673 -- # uname -s 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@681 -- # test_type=rbd 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@682 -- # crypto_device= 00:31:26.100 02:26:34 blockdev_rbd -- bdev/blockdev.sh@683 -- # dek= 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@684 -- # env_ctx= 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == 
bdev ]] 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@689 -- # [[ rbd == crypto_* ]] 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=124812 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@49 -- # waitforlisten 124812 00:31:26.101 02:26:34 blockdev_rbd -- common/autotest_common.sh@829 -- # '[' -z 124812 ']' 00:31:26.101 02:26:34 blockdev_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.101 02:26:34 blockdev_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:26.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.101 02:26:34 blockdev_rbd -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:26.101 02:26:34 blockdev_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.101 02:26:34 blockdev_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:26.101 02:26:34 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:26.360 [2024-07-23 02:26:34.929742] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:26.360 [2024-07-23 02:26:34.929924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124812 ] 00:31:26.360 [2024-07-23 02:26:35.086245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:26.620 [2024-07-23 02:26:35.294710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@862 -- # return 0 00:31:27.556 02:26:35 blockdev_rbd -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:31:27.556 02:26:35 blockdev_rbd -- bdev/blockdev.sh@719 -- # setup_rbd_conf 00:31:27.556 02:26:35 blockdev_rbd -- bdev/blockdev.sh@260 -- # timing_enter rbd_setup 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:27.556 02:26:35 blockdev_rbd -- bdev/blockdev.sh@261 -- # rbd_setup 127.0.0.1 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 
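Before bringing the cluster up, rbd_setup first runs scripts/ceph/stop.sh, whose umount and losetup errors in the trace that follows are expected on a clean machine. A hedged sketch of that idempotent-teardown pattern (a scratch directory stands in for /var/tmp/ceph; every step that may legitimately fail is chained with `|| true` so the script survives `set -e`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the tolerant teardown in scripts/ceph/stop.sh:
# each cleanup step may fail because the resource does not exist yet,
# so failures are swallowed with `|| true` under `set -e`.
set -e
base_dir=$(mktemp -d)           # stand-in for /var/tmp/ceph
dev=/dev/loop200

umount "${dev}p2" 2>/dev/null || true   # may fail: "no mount point specified"
losetup -d "$dev" 2>/dev/null || true   # may fail: "detach failed"
rm -rf "$base_dir"
echo "teardown complete"
```

The `|| true` guards are what produce the `+ true` lines interleaved with the failures in the trace below.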
00:31:27.556 02:26:35 blockdev_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:31:27.556 + base_dir=/var/tmp/ceph 00:31:27.556 + image=/var/tmp/ceph/ceph_raw.img 00:31:27.556 + dev=/dev/loop200 00:31:27.556 + pkill -9 ceph 00:31:27.556 + sleep 3 00:31:30.841 + umount /dev/loop200p2 00:31:30.841 umount: /dev/loop200p2: no mount point specified. 00:31:30.841 + losetup -d /dev/loop200 00:31:30.841 losetup: /dev/loop200: detach failed: No such device or address 00:31:30.841 + rm -rf /var/tmp/ceph 00:31:30.841 02:26:39 blockdev_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:31:30.841 + set -e 00:31:30.841 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:31:30.841 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:31:30.841 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:31:30.841 + base_dir=/var/tmp/ceph 00:31:30.841 + mon_ip=127.0.0.1 00:31:30.841 + mon_dir=/var/tmp/ceph/mon.a 00:31:30.841 + pid_dir=/var/tmp/ceph/pid 00:31:30.841 + ceph_conf=/var/tmp/ceph/ceph.conf 00:31:30.841 + mnt_dir=/var/tmp/ceph/mnt 00:31:30.841 + image=/var/tmp/ceph_raw.img 00:31:30.841 + dev=/dev/loop200 00:31:30.841 + modprobe loop 00:31:30.841 + umount /dev/loop200p2 00:31:30.841 umount: /dev/loop200p2: no mount point specified. 00:31:30.841 + true 00:31:30.841 + losetup -d /dev/loop200 00:31:30.841 losetup: /dev/loop200: detach failed: No such device or address 00:31:30.841 + true 00:31:30.841 + '[' -d /var/tmp/ceph ']' 00:31:30.841 + mkdir /var/tmp/ceph 00:31:30.841 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:31:30.841 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:31:30.841 + fallocate -l 4G /var/tmp/ceph_raw.img 00:31:30.841 + mknod /dev/loop200 b 7 200 00:31:30.841 mknod: /dev/loop200: File exists 00:31:30.841 + true 00:31:30.841 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:31:30.841 + PARTED='parted -s' 00:31:30.841 + SGDISK=sgdisk 00:31:30.841 Partitioning /dev/loop200 00:31:30.841 + echo 'Partitioning /dev/loop200' 00:31:30.841 + parted -s /dev/loop200 mktable gpt 00:31:30.841 + sleep 2 00:31:32.742 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:31:32.742 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:31:32.742 Setting name on /dev/loop200 00:31:32.742 + partno=0 00:31:32.742 + echo 'Setting name on /dev/loop200' 00:31:32.742 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:31:33.677 Warning: The kernel is still using the old partition table. 00:31:33.677 The new table will be used at the next reboot or after you 00:31:33.677 run partprobe(8) or kpartx(8) 00:31:33.677 The operation has completed successfully. 00:31:33.677 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:31:34.615 Warning: The kernel is still using the old partition table. 00:31:34.615 The new table will be used at the next reboot or after you 00:31:34.615 run partprobe(8) or kpartx(8) 00:31:34.615 The operation has completed successfully. 
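The parted calls above split the 4 GiB loop image at the 2 GiB mark (journal partition 0%..2GiB, data partition 2GiB..100%). A quick sanity check of the sector arithmetic, assuming 512-byte sectors and the usual 2048-sector alignment offset, reproduces the sizes kpartx reports next:

```shell
# Partition geometry check for the 4 GiB /dev/loop200 image:
# p2 begins at the 2 GiB boundary, p1 runs from the 2048-sector
# alignment offset up to that boundary.
sector_size=512
boundary=$((2 * 1024 * 1024 * 1024 / sector_size))   # first sector of p2
p1_start=2048                                        # default GPT alignment
p1_len=$((boundary - p1_start))
echo "p2 start=$boundary p1 sectors=$p1_len"
# → p2 start=4194304 p1 sectors=4192256
```

Both numbers match the `loop200p1 : 0 4192256 ... 2048` / `loop200p2 : ... 4194304` lines kpartx prints below.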
00:31:34.615 + kpartx /dev/loop200 00:31:34.615 loop200p1 : 0 4192256 /dev/loop200 2048 00:31:34.615 loop200p2 : 0 4192256 /dev/loop200 4194304 00:31:34.615 ++ ceph -v 00:31:34.615 ++ awk '{print $3}' 00:31:34.874 + ceph_version=17.2.7 00:31:34.874 + ceph_maj=17 00:31:34.874 + '[' 17 -gt 12 ']' 00:31:34.874 + update_config=true 00:31:34.874 + rm -f /var/log/ceph/ceph-mon.a.log 00:31:34.874 + set_min_mon_release='--set-min-mon-release 14' 00:31:34.874 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:31:34.874 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:31:34.874 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:31:34.874 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:31:34.874 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:31:34.874 = sectsz=512 attr=2, projid32bit=1 00:31:34.874 = crc=1 finobt=1, sparse=1, rmapbt=0 00:31:34.874 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:31:34.874 data = bsize=4096 blocks=524032, imaxpct=25 00:31:34.874 = sunit=0 swidth=0 blks 00:31:34.874 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:31:34.874 log =internal log bsize=4096 blocks=16384, version=2 00:31:34.874 = sectsz=512 sunit=0 blks, lazy-count=1 00:31:34.874 realtime =none extsz=4096 blocks=0, rtextents=0 00:31:34.874 Discarding blocks...Done. 00:31:34.874 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:31:34.874 + cat 00:31:34.874 + rm -rf '/var/tmp/ceph/mon.a/*' 00:31:34.874 + mkdir -p /var/tmp/ceph/mon.a 00:31:34.874 + mkdir -p /var/tmp/ceph/pid 00:31:34.874 + rm -f /etc/ceph/ceph.client.admin.keyring 00:31:34.874 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:31:34.874 creating /var/tmp/ceph/keyring 00:31:34.874 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:31:34.874 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:31:34.874 monmaptool: monmap file /var/tmp/ceph/monmap 00:31:34.874 monmaptool: generated fsid d632eb07-d2b7-43d4-acc9-7d0e06cef6c8 00:31:34.874 setting min_mon_release = octopus 00:31:34.874 epoch 0 00:31:34.874 fsid d632eb07-d2b7-43d4-acc9-7d0e06cef6c8 00:31:34.874 last_changed 2024-07-23T02:26:43.532024+0000 00:31:34.874 created 2024-07-23T02:26:43.532024+0000 00:31:34.874 min_mon_release 15 (octopus) 00:31:34.874 election_strategy: 1 00:31:34.874 0: v2:127.0.0.1:12046/0 mon.a 00:31:34.874 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:31:34.874 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:31:34.874 + '[' true = true ']' 00:31:34.874 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:31:34.874 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:31:34.874 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:31:34.874 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:31:34.874 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:31:34.874 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:31:34.874 ++ hostname 00:31:34.874 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:31:35.132 + true 00:31:35.132 + '[' true = true ']' 00:31:35.132 + ceph-conf --name mon.a --show-config-value log_file 00:31:35.132 
/var/log/ceph/ceph-mon.a.log 00:31:35.132 ++ ceph -s 00:31:35.132 ++ grep id 00:31:35.132 ++ awk '{print $2}' 00:31:35.391 + fsid=d632eb07-d2b7-43d4-acc9-7d0e06cef6c8 00:31:35.391 + sed -i 's/perf = true/perf = true\n\tfsid = d632eb07-d2b7-43d4-acc9-7d0e06cef6c8 \n/g' /var/tmp/ceph/ceph.conf 00:31:35.391 + (( ceph_maj < 18 )) 00:31:35.391 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:31:35.391 + cat /var/tmp/ceph/ceph.conf 00:31:35.391 [global] 00:31:35.391 debug_lockdep = 0/0 00:31:35.391 debug_context = 0/0 00:31:35.391 debug_crush = 0/0 00:31:35.391 debug_buffer = 0/0 00:31:35.391 debug_timer = 0/0 00:31:35.391 debug_filer = 0/0 00:31:35.391 debug_objecter = 0/0 00:31:35.391 debug_rados = 0/0 00:31:35.391 debug_rbd = 0/0 00:31:35.391 debug_ms = 0/0 00:31:35.391 debug_monc = 0/0 00:31:35.391 debug_tp = 0/0 00:31:35.391 debug_auth = 0/0 00:31:35.391 debug_finisher = 0/0 00:31:35.391 debug_heartbeatmap = 0/0 00:31:35.391 debug_perfcounter = 0/0 00:31:35.391 debug_asok = 0/0 00:31:35.391 debug_throttle = 0/0 00:31:35.391 debug_mon = 0/0 00:31:35.391 debug_paxos = 0/0 00:31:35.391 debug_rgw = 0/0 00:31:35.391 00:31:35.391 perf = true 00:31:35.391 osd objectstore = filestore 00:31:35.391 00:31:35.391 fsid = d632eb07-d2b7-43d4-acc9-7d0e06cef6c8 00:31:35.391 00:31:35.391 mutex_perf_counter = false 00:31:35.391 throttler_perf_counter = false 00:31:35.391 rbd cache = false 00:31:35.391 mon_allow_pool_delete = true 00:31:35.391 00:31:35.391 osd_pool_default_size = 1 00:31:35.391 00:31:35.391 [mon] 00:31:35.391 mon_max_pool_pg_num=166496 00:31:35.391 mon_osd_max_split_count = 10000 00:31:35.391 mon_pg_warn_max_per_osd = 10000 00:31:35.391 00:31:35.391 [osd] 00:31:35.391 osd_op_threads = 64 00:31:35.391 filestore_queue_max_ops=5000 00:31:35.391 filestore_queue_committing_max_ops=5000 00:31:35.391 journal_max_write_entries=1000 00:31:35.391 journal_queue_max_ops=3000 00:31:35.391 objecter_inflight_ops=102400 00:31:35.391 
filestore_wbthrottle_enable=false 00:31:35.391 filestore_queue_max_bytes=1048576000 00:31:35.391 filestore_queue_committing_max_bytes=1048576000 00:31:35.391 journal_max_write_bytes=1048576000 00:31:35.391 journal_queue_max_bytes=1048576000 00:31:35.391 ms_dispatch_throttle_bytes=1048576000 00:31:35.391 objecter_inflight_op_bytes=1048576000 00:31:35.391 filestore_max_sync_interval=10 00:31:35.391 osd_client_message_size_cap = 0 00:31:35.391 osd_client_message_cap = 0 00:31:35.391 osd_enable_op_tracker = false 00:31:35.391 filestore_fd_cache_size = 10240 00:31:35.391 filestore_fd_cache_shards = 64 00:31:35.391 filestore_op_threads = 16 00:31:35.391 osd_op_num_shards = 48 00:31:35.391 osd_op_num_threads_per_shard = 2 00:31:35.391 osd_pg_object_context_cache_count = 10240 00:31:35.391 filestore_odsync_write = True 00:31:35.391 journal_dynamic_throttle = True 00:31:35.391 00:31:35.391 [osd.0] 00:31:35.391 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:31:35.391 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:31:35.391 00:31:35.391 # add mon address 00:31:35.391 [mon.a] 00:31:35.391 mon addr = v2:127.0.0.1:12046 00:31:35.391 + i=0 00:31:35.391 + mkdir -p /var/tmp/ceph/mnt 00:31:35.391 ++ uuidgen 00:31:35.391 + uuid=ce199a69-8677-46c2-b5c5-a570a4f40b6e 00:31:35.391 + ceph -c /var/tmp/ceph/ceph.conf osd create ce199a69-8677-46c2-b5c5-a570a4f40b6e 0 00:31:35.649 0 00:31:35.649 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid ce199a69-8677-46c2-b5c5-a570a4f40b6e --check-needs-journal --no-mon-config 00:31:35.649 2024-07-23T02:26:44.354+0000 7f8765256400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:31:35.649 2024-07-23T02:26:44.354+0000 7f8765256400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:31:35.649 2024-07-23T02:26:44.417+0000 7f8765256400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected ce199a69-8677-46c2-b5c5-a570a4f40b6e, invalid (someone else's?) journal 00:31:35.907 2024-07-23T02:26:44.459+0000 7f8765256400 -1 journal do_read_entry(4096): bad header magic 00:31:35.907 2024-07-23T02:26:44.459+0000 7f8765256400 -1 journal do_read_entry(4096): bad header magic 00:31:35.907 ++ hostname 00:31:35.907 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:31:37.281 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:31:37.281 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:31:37.540 added key for osd.0 00:31:37.540 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:31:37.798 + class_dir=/lib64/rados-classes 00:31:37.798 + [[ -e /lib64/rados-classes ]] 00:31:37.798 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:31:38.056 + pkill -9 ceph-osd 00:31:38.056 + true 00:31:38.056 + sleep 2 00:31:39.974 + mkdir -p /var/tmp/ceph/pid 00:31:39.974 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:31:40.274 2024-07-23T02:26:48.744+0000 7f089fa2e400 -1 Falling back to public interface 00:31:40.274 2024-07-23T02:26:48.796+0000 7f089fa2e400 -1 journal do_read_entry(8192): bad header magic 00:31:40.274 2024-07-23T02:26:48.796+0000 7f089fa2e400 -1 journal do_read_entry(8192): bad header magic 00:31:40.274 2024-07-23T02:26:48.806+0000 7f089fa2e400 -1 osd.0 0 log_to_monitors true 00:31:41.211 02:26:49 blockdev_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:31:42.148 pool 'rbd' created 00:31:42.148 02:26:50 blockdev_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
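The fsid splice seen earlier in the trace (`sed -i 's/perf = true/...'`) can be reproduced in isolation. A hedged sketch with a temp file and a hard-coded fsid standing in for the live cluster:

```shell
# Hedged re-run of start.sh's fsid injection: the fsid scraped from
# `ceph -s` is spliced into ceph.conf right after the `perf = true`
# line, using GNU sed's \n/\t escapes in the replacement text.
conf=$(mktemp)
printf '[global]\nperf = true\n' > "$conf"
fsid=d632eb07-d2b7-43d4-acc9-7d0e06cef6c8   # stand-in for: ceph -s | grep id | awk '{print $2}'
sed -i "s/perf = true/perf = true\n\tfsid = $fsid\n/g" "$conf"
grep -q "fsid = $fsid" "$conf" && echo "fsid injected"
```

Note the `\n`/`\t` escapes in the replacement are a GNU sed extension, which is why the resulting conf dump shows the fsid indented on its own line.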
00:31:47.421 02:26:55 blockdev_rbd -- bdev/blockdev.sh@262 -- # timing_exit rbd_setup 00:31:47.421 02:26:55 blockdev_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.421 02:26:55 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 02:26:55 blockdev_rbd -- bdev/blockdev.sh@264 -- # rpc_cmd bdev_rbd_create -b Ceph0 rbd foo 512 00:31:47.421 02:26:55 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.421 02:26:55 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 [2024-07-23 02:26:56.022856] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:31:47.421 WARNING:bdev_rbd_create should be used with specifying -c to have a cluster name after bdev_rbd_register_cluster. 00:31:47.421 Ceph0 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@739 -- # cat 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "8d72cde7-ce56-4106-bb1e-39c995b5810d"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "8d72cde7-ce56-4106-bb1e-39c995b5810d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' 
' }' ' }' '}' 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@748 -- # jq -r .name 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@751 -- # hello_world_bdev=Ceph0 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:31:47.421 02:26:56 blockdev_rbd -- bdev/blockdev.sh@753 -- # killprocess 124812 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@948 -- # '[' -z 124812 ']' 00:31:47.421 02:26:56 blockdev_rbd -- common/autotest_common.sh@952 -- # kill -0 124812 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@953 -- # uname 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 124812 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:47.680 killing process with pid 124812 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 124812' 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@967 -- # kill 124812 00:31:47.680 02:26:56 blockdev_rbd -- common/autotest_common.sh@972 -- # wait 124812 00:31:49.586 02:26:58 blockdev_rbd -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:49.586 02:26:58 blockdev_rbd -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:31:49.586 02:26:58 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:31:49.586 02:26:58 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:49.586 02:26:58 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 
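The mapfile/jq pipeline traced above is how blockdev.sh derives `hello_world_bdev` from the `bdev_get_bdevs` dump: unclaimed bdevs are filtered with `jq -r '.[] | select(.claimed == false)'`, their names collected into an array, and the first entry becomes the hello-world target. A hedged, self-contained sketch of that selection step (sed substitutes for jq here, and the one-bdev JSON fragment is a stand-in):

```shell
# Hedged sketch of the bdev-name selection in blockdev.sh; the real
# script uses jq, this version extracts the name with sed so it runs
# without jq installed.
bdevs_json='{ "name": "Ceph0", "claimed": false }'
mapfile -t bdevs_name < <(printf '%s\n' "$bdevs_json" |
    sed -n 's/.*"name": "\([^"]*\)".*/\1/p')
bdev_list=("${bdevs_name[@]}")
hello_world_bdev=${bdev_list[0]}
echo "hello_world_bdev=$hello_world_bdev"
# → hello_world_bdev=Ceph0
```

With only the unclaimed Ceph0 bdev present, the first (and only) array entry is what the subsequent hello_bdev run opens.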
00:31:49.586 ************************************ 00:31:49.586 START TEST bdev_hello_world 00:31:49.586 ************************************ 00:31:49.586 02:26:58 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Ceph0 '' 00:31:49.586 [2024-07-23 02:26:58.227935] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:49.587 [2024-07-23 02:26:58.228105] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125699 ] 00:31:49.846 [2024-07-23 02:26:58.384170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.846 [2024-07-23 02:26:58.580944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.414 [2024-07-23 02:26:58.973466] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:31:50.414 [2024-07-23 02:26:58.985853] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:50.414 [2024-07-23 02:26:58.985907] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Ceph0 00:31:50.414 [2024-07-23 02:26:58.985943] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:50.414 [2024-07-23 02:26:58.988649] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:50.414 [2024-07-23 02:26:59.007433] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:50.414 [2024-07-23 02:26:59.007534] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:50.414 [2024-07-23 02:26:59.013718] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:31:50.414 00:31:50.414 [2024-07-23 02:26:59.013779] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:51.351 00:31:51.351 real 0m1.881s 00:31:51.351 user 0m1.485s 00:31:51.351 sys 0m0.278s 00:31:51.351 02:27:00 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:51.351 02:27:00 blockdev_rbd.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:31:51.351 ************************************ 00:31:51.351 END TEST bdev_hello_world 00:31:51.351 ************************************ 00:31:51.351 02:27:00 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:31:51.351 02:27:00 blockdev_rbd -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:31:51.351 02:27:00 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:51.351 02:27:00 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:51.351 02:27:00 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:51.351 ************************************ 00:31:51.351 START TEST bdev_bounds 00:31:51.351 ************************************ 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=125755 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:51.351 Process bdevio pid: 125755 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 125755' 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 125755 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 
125755 ']' 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:51.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:51.351 02:27:00 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:51.611 [2024-07-23 02:27:00.182269] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:31:51.611 [2024-07-23 02:27:00.182460] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125755 ] 00:31:51.611 [2024-07-23 02:27:00.334449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:51.870 [2024-07-23 02:27:00.533212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.870 [2024-07-23 02:27:00.533394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.870 [2024-07-23 02:27:00.533408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:52.438 [2024-07-23 02:27:00.939333] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:31:52.438 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:52.438 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:31:52.438 02:27:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py 
perform_tests 00:31:52.438 I/O targets: 00:31:52.438 Ceph0: 2048000 blocks of 512 bytes (1000 MiB) 00:31:52.438 00:31:52.438 00:31:52.438 CUnit - A unit testing framework for C - Version 2.1-3 00:31:52.438 http://cunit.sourceforge.net/ 00:31:52.438 00:31:52.438 00:31:52.438 Suite: bdevio tests on: Ceph0 00:31:52.438 Test: blockdev write read block ...passed 00:31:52.697 Test: blockdev write zeroes read block ...passed 00:31:52.697 Test: blockdev write zeroes read no split ...passed 00:31:52.697 Test: blockdev write zeroes read split ...passed 00:31:52.697 Test: blockdev write zeroes read split partial ...passed 00:31:52.697 Test: blockdev reset ...passed 00:31:52.697 Test: blockdev write read 8 blocks ...passed 00:31:52.697 Test: blockdev write read size > 128k ...passed 00:31:52.697 Test: blockdev write read invalid size ...passed 00:31:52.697 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:52.697 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:52.697 Test: blockdev write read max offset ...passed 00:31:52.697 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:52.697 Test: blockdev writev readv 8 blocks ...passed 00:31:52.697 Test: blockdev writev readv 30 x 1block ...passed 00:31:52.697 Test: blockdev writev readv block ...passed 00:31:52.697 Test: blockdev writev readv size > 128k ...passed 00:31:52.697 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:52.697 Test: blockdev comparev and writev ...passed 00:31:52.697 Test: blockdev nvme passthru rw ...passed 00:31:52.697 Test: blockdev nvme passthru vendor specific ...passed 00:31:52.697 Test: blockdev nvme admin passthru ...passed 00:31:52.697 Test: blockdev copy ...passed 00:31:52.697 00:31:52.697 Run Summary: Type Total Ran Passed Failed Inactive 00:31:52.697 suites 1 1 n/a 0 0 00:31:52.697 tests 23 23 23 0 0 00:31:52.697 asserts 130 130 130 0 n/a 00:31:52.697 00:31:52.697 Elapsed time = 0.499 seconds 
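The killprocess helper traced for pids 124556 and 124812 earlier, and about to run again for the bdevio process, follows a recognizable pattern in autotest_common.sh: probe the pid with `kill -0`, look up its command name, refuse to signal a sudo wrapper, then kill and reap it. A minimal sketch under those assumptions (function name and the background `sleep` victim are illustrative, not the exact SPDK implementation):

```shell
# Hedged sketch of the killprocess pattern from autotest_common.sh.
killprocess_sketch() {
    local pid=$1
    # kill -0 delivers no signal; it only tests whether the pid exists
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 1
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid" 2>/dev/null || echo unknown)
    if [ "$process_name" = sudo ]; then
        return 1        # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}

sleep 60 &
victim=$!
killprocess_sketch "$victim"
```

This is also why the log shows `kill: (124556) - No such process` in the cleanup trap: the process was already reaped by the first killprocess, and the second probe falls into the not-found branch.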
00:31:52.697 0 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 125755 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 125755 ']' 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 125755 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125755 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:52.697 killing process with pid 125755 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125755' 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@967 -- # kill 125755 00:31:52.697 02:27:01 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@972 -- # wait 125755 00:31:54.075 ************************************ 00:31:54.075 END TEST bdev_bounds 00:31:54.075 ************************************ 00:31:54.075 02:27:02 blockdev_rbd.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:31:54.075 00:31:54.075 real 0m2.406s 00:31:54.075 user 0m5.461s 00:31:54.075 sys 0m0.451s 00:31:54.075 02:27:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:54.075 02:27:02 blockdev_rbd.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:31:54.075 02:27:02 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:31:54.075 02:27:02 blockdev_rbd -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:31:54.075 02:27:02 
blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:31:54.075 02:27:02 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:54.075 02:27:02 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:31:54.075 ************************************ 00:31:54.075 START TEST bdev_nbd 00:31:54.075 ************************************ 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Ceph0 '' 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Ceph0') 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd 
-- bdev/blockdev.sh@314 -- # bdev_list=('Ceph0') 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=125828 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 125828 /var/tmp/spdk-nbd.sock 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@829 -- # '[' -z 125828 ']' 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:54.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:54.075 02:27:02 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:31:54.075 [2024-07-23 02:27:02.645118] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:31:54.075 [2024-07-23 02:27:02.645276] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:54.075 [2024-07-23 02:27:02.804397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:54.334 [2024-07-23 02:27:03.002688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.899 [2024-07-23 02:27:03.398614] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@862 -- # return 0 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Ceph0 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Ceph0') 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Ceph0 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Ceph0') 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # 
(( i < 1 )) 00:31:54.899 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:55.157 1+0 records in 00:31:55.157 1+0 records out 00:31:55.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134421 s, 3.0 MB/s 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 
0 ']' 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:55.157 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:55.416 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:55.416 { 00:31:55.416 "nbd_device": "/dev/nbd0", 00:31:55.416 "bdev_name": "Ceph0" 00:31:55.416 } 00:31:55.416 ]' 00:31:55.416 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:55.416 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:55.416 { 00:31:55.416 "nbd_device": "/dev/nbd0", 00:31:55.416 "bdev_name": "Ceph0" 00:31:55.416 } 00:31:55.416 ]' 00:31:55.416 02:27:03 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:55.416 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:55.675 02:27:04 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.675 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@127 -- # return 0 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Ceph0') 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Ceph0 /dev/nbd0 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Ceph0') 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:55.933 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Ceph0 /dev/nbd0 00:31:56.192 /dev/nbd0 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@866 -- # local 
nbd_name=nbd0 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@867 -- # local i 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@871 -- # break 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:56.192 1+0 records in 00:31:56.192 1+0 records out 00:31:56.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123355 s, 3.3 MB/s 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@884 -- # size=4096 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@887 -- # return 0 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:56.192 02:27:04 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:56.450 { 00:31:56.450 "nbd_device": "/dev/nbd0", 00:31:56.450 "bdev_name": "Ceph0" 00:31:56.450 } 00:31:56.450 ]' 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:56.450 { 00:31:56.450 "nbd_device": "/dev/nbd0", 00:31:56.450 "bdev_name": "Ceph0" 00:31:56.450 } 00:31:56.450 ]' 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:56.450 256+0 records in 00:31:56.450 256+0 records out 00:31:56.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104988 s, 99.9 MB/s 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:56.450 02:27:05 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:57.827 256+0 records in 00:31:57.827 256+0 records out 00:31:57.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 1.43622 s, 730 kB/s 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:57.827 02:27:06 
blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:57.827 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:58.086 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:58.086 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:58.086 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:58.086 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:58.086 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:58.086 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:58.344 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:58.344 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:58.344 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:58.344 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.344 02:27:06 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:58.344 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:58.344 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:58.344 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:58.604 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:58.878 malloc_lvol_verify 00:31:58.878 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:59.151 a6894281-2147-4635-b962-1c9770796d16 00:31:59.151 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:59.151 93d00de4-c5fe-4fab-8015-fd08d8d5e47b 00:31:59.410 02:27:07 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@138 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:59.410 /dev/nbd0 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:59.410 mke2fs 1.46.5 (30-Dec-2021) 00:31:59.410 Discarding device blocks: 0/4096 done 00:31:59.410 Creating filesystem with 4096 1k blocks and 1024 inodes 00:31:59.410 00:31:59.410 Allocating group tables: 0/1 done 00:31:59.410 Writing inode tables: 0/1 done 00:31:59.410 Creating journal (1024 blocks): done 00:31:59.410 Writing superblocks and filesystem accounting information: 0/1 done 00:31:59.410 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:59.410 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 125828 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@948 -- # '[' -z 125828 ']' 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@952 -- # kill -0 125828 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # uname 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 125828 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:59.670 killing process with pid 125828 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 125828' 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@967 -- # kill 125828 00:31:59.670 02:27:08 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@972 -- # wait 125828 00:32:01.048 02:27:09 blockdev_rbd.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:32:01.048 00:32:01.048 real 0m6.971s 00:32:01.048 user 0m8.935s 00:32:01.048 sys 0m1.862s 00:32:01.048 02:27:09 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:01.048 ************************************ 00:32:01.048 END TEST bdev_nbd 00:32:01.048 ************************************ 00:32:01.048 
02:27:09 blockdev_rbd.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:32:01.048 02:27:09 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:32:01.048 02:27:09 blockdev_rbd -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:32:01.048 02:27:09 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = nvme ']' 00:32:01.048 02:27:09 blockdev_rbd -- bdev/blockdev.sh@763 -- # '[' rbd = gpt ']' 00:32:01.048 02:27:09 blockdev_rbd -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:32:01.048 02:27:09 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:01.048 02:27:09 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.048 02:27:09 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:01.048 ************************************ 00:32:01.048 START TEST bdev_fio 00:32:01.048 ************************************ 00:32:01.048 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:32:01.048 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:32:01.048 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:32:01.048 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:32:01.048 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:32:01.048 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:32:01.048 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:01.049 02:27:09 
blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Ceph0]' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Ceph0 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:01.049 ************************************ 00:32:01.049 START TEST bdev_fio_rw_verify 00:32:01.049 ************************************ 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:01.049 02:27:09 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 
00:32:01.308 job_Ceph0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:01.308 fio-3.35 00:32:01.308 Starting 1 thread 00:32:13.518 00:32:13.518 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=126074: Tue Jul 23 02:27:20 2024 00:32:13.518 read: IOPS=511, BW=2046KiB/s (2095kB/s)(20.0MiB/10008msec) 00:32:13.518 slat (usec): min=4, max=546, avg=19.80, stdev=22.58 00:32:13.518 clat (usec): min=535, max=436002, avg=4135.28, stdev=26387.59 00:32:13.518 lat (usec): min=562, max=436009, avg=4155.08, stdev=26387.77 00:32:13.518 clat percentiles (usec): 00:32:13.518 | 50.000th=[ 1500], 99.000th=[ 66323], 99.900th=[429917], 00:32:13.518 | 99.990th=[434111], 99.999th=[434111] 00:32:13.518 write: IOPS=584, BW=2340KiB/s (2396kB/s)(22.9MiB/10008msec); 0 zone resets 00:32:13.518 slat (usec): min=15, max=1102, avg=53.89, stdev=38.55 00:32:13.518 clat (msec): min=2, max=155, avg= 9.96, stdev=18.64 00:32:13.518 lat (msec): min=2, max=155, avg=10.01, stdev=18.64 00:32:13.518 clat percentiles (msec): 00:32:13.518 | 50.000th=[ 6], 99.000th=[ 106], 99.900th=[ 144], 99.990th=[ 155], 00:32:13.518 | 99.999th=[ 155] 00:32:13.518 bw ( KiB/s): min= 432, max= 4816, per=100.00%, avg=2461.18, stdev=1712.91, samples=17 00:32:13.518 iops : min= 108, max= 1204, avg=615.29, stdev=428.23, samples=17 00:32:13.518 lat (usec) : 750=0.26%, 1000=2.01% 00:32:13.518 lat (msec) : 2=40.84%, 4=6.16%, 10=46.46%, 20=1.14%, 50=0.34% 00:32:13.518 lat (msec) : 100=1.61%, 250=0.96%, 500=0.22% 00:32:13.518 cpu : usr=93.56%, sys=4.36%, ctx=622, majf=0, minf=15441 00:32:13.518 IO depths : 1=0.1%, 2=0.1%, 4=13.4%, 8=86.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:13.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.518 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.518 issued rwts: total=5120,5854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.518 latency : target=0, window=0, percentile=100.00%, depth=8 
00:32:13.518 00:32:13.518 Run status group 0 (all jobs): 00:32:13.518 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=20.0MiB (21.0MB), run=10008-10008msec 00:32:13.518 WRITE: bw=2340KiB/s (2396kB/s), 2340KiB/s-2340KiB/s (2396kB/s-2396kB/s), io=22.9MiB (24.0MB), run=10008-10008msec 00:32:13.518 ----------------------------------------------------- 00:32:13.518 Suppressions used: 00:32:13.518 count bytes template 00:32:13.518 1 6 /usr/src/fio/parse.c 00:32:13.518 726 69696 /usr/src/fio/iolog.c 00:32:13.518 1 8 libtcmalloc_minimal.so 00:32:13.518 1 904 libcrypto.so 00:32:13.518 ----------------------------------------------------- 00:32:13.518 00:32:13.518 00:32:13.518 real 0m12.408s 00:32:13.518 user 0m12.710s 00:32:13.518 sys 0m2.310s 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:32:13.518 ************************************ 00:32:13.518 END TEST bdev_fio_rw_verify 00:32:13.518 ************************************ 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- 
common/autotest_common.sh@1283 -- # local env_context= 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "8d72cde7-ce56-4106-bb1e-39c995b5810d"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "8d72cde7-ce56-4106-bb1e-39c995b5810d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Ceph0 ]] 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Ceph0",' ' "aliases": [' ' "8d72cde7-ce56-4106-bb1e-39c995b5810d"' ' ],' ' "product_name": "Ceph Rbd Disk",' ' "block_size": 512,' ' "num_blocks": 2048000,' ' "uuid": "8d72cde7-ce56-4106-bb1e-39c995b5810d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": true,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "rbd": {' ' "pool_name": "rbd",' ' "rbd_name": "foo"' ' }' ' }' '}' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Ceph0]' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Ceph0 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev 
--iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:13.518 ************************************ 00:32:13.518 START TEST bdev_fio_trim 00:32:13.518 ************************************ 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:13.518 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # break 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:13.519 02:27:22 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:32:13.778 job_Ceph0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:32:13.778 fio-3.35 00:32:13.778 Starting 1 thread 00:32:25.986 00:32:25.986 job_Ceph0: (groupid=0, jobs=1): err= 0: pid=126260: Tue Jul 23 02:27:33 2024 00:32:25.986 write: IOPS=842, BW=3369KiB/s 
(3450kB/s)(32.9MiB/10003msec); 0 zone resets 00:32:25.986 slat (usec): min=6, max=648, avg=39.72, stdev=43.81 00:32:25.986 clat (usec): min=2685, max=32402, avg=9250.03, stdev=3077.65 00:32:25.986 lat (usec): min=2718, max=32443, avg=9289.75, stdev=3079.24 00:32:25.986 clat percentiles (usec): 00:32:25.986 | 50.000th=[ 9241], 99.000th=[15533], 99.900th=[25560], 99.990th=[32375], 00:32:25.986 | 99.999th=[32375] 00:32:25.986 bw ( KiB/s): min= 2688, max= 3968, per=100.00%, avg=3407.58, stdev=413.19, samples=19 00:32:25.986 iops : min= 672, max= 992, avg=851.89, stdev=103.30, samples=19 00:32:25.986 trim: IOPS=842, BW=3369KiB/s (3450kB/s)(32.9MiB/10003msec); 0 zone resets 00:32:25.986 slat (usec): min=4, max=980, avg=19.78, stdev=28.49 00:32:25.986 clat (usec): min=4, max=12706, avg=175.10, stdev=293.45 00:32:25.986 lat (usec): min=22, max=12902, avg=194.88, stdev=293.99 00:32:25.986 clat percentiles (usec): 00:32:25.986 | 50.000th=[ 145], 99.000th=[ 519], 99.900th=[ 865], 99.990th=[12649], 00:32:25.986 | 99.999th=[12649] 00:32:25.986 bw ( KiB/s): min= 2688, max= 3968, per=100.00%, avg=3410.95, stdev=416.33, samples=19 00:32:25.986 iops : min= 672, max= 992, avg=852.74, stdev=104.08, samples=19 00:32:25.986 lat (usec) : 10=0.13%, 20=0.67%, 50=5.52%, 100=11.10%, 250=21.31% 00:32:25.986 lat (usec) : 500=10.66%, 750=0.55%, 1000=0.03% 00:32:25.986 lat (msec) : 2=0.01%, 4=1.29%, 10=29.16%, 20=19.38%, 50=0.19% 00:32:25.986 cpu : usr=92.62%, sys=5.24%, ctx=904, majf=0, minf=22330 00:32:25.986 IO depths : 1=0.1%, 2=0.1%, 4=15.8%, 8=84.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:25.986 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.986 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:25.986 issued rwts: total=0,8425,8425,0 short=0,0,0,0 dropped=0,0,0,0 00:32:25.986 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:25.986 00:32:25.986 Run status group 0 (all jobs): 00:32:25.986 WRITE: bw=3369KiB/s 
(3450kB/s), 3369KiB/s-3369KiB/s (3450kB/s-3450kB/s), io=32.9MiB (34.5MB), run=10003-10003msec 00:32:25.986 TRIM: bw=3369KiB/s (3450kB/s), 3369KiB/s-3369KiB/s (3450kB/s-3450kB/s), io=32.9MiB (34.5MB), run=10003-10003msec 00:32:25.986 ----------------------------------------------------- 00:32:25.986 Suppressions used: 00:32:25.986 count bytes template 00:32:25.986 1 6 /usr/src/fio/parse.c 00:32:25.986 1 8 libtcmalloc_minimal.so 00:32:25.986 1 904 libcrypto.so 00:32:25.986 ----------------------------------------------------- 00:32:25.986 00:32:25.986 00:32:25.986 real 0m12.430s 00:32:25.986 user 0m12.546s 00:32:25.986 sys 0m1.778s 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:32:25.986 ************************************ 00:32:25.986 END TEST bdev_fio_trim 00:32:25.986 ************************************ 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:32:25.986 /home/vagrant/spdk_repo/spdk 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@369 -- # popd 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:32:25.986 00:32:25.986 real 0m25.161s 00:32:25.986 user 0m25.428s 00:32:25.986 sys 0m4.223s 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:25.986 ************************************ 00:32:25.986 END TEST bdev_fio 00:32:25.986 02:27:34 blockdev_rbd.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:32:25.986 ************************************ 00:32:25.986 02:27:34 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 
0 00:32:25.986 02:27:34 blockdev_rbd -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:25.986 02:27:34 blockdev_rbd -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:25.986 02:27:34 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:32:25.986 02:27:34 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.986 02:27:34 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:26.246 ************************************ 00:32:26.246 START TEST bdev_verify 00:32:26.246 ************************************ 00:32:26.246 02:27:34 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:32:26.246 [2024-07-23 02:27:34.921893] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:26.246 [2024-07-23 02:27:34.922841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126412 ] 00:32:26.505 [2024-07-23 02:27:35.109202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:26.763 [2024-07-23 02:27:35.400933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:26.763 [2024-07-23 02:27:35.400933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.033 [2024-07-23 02:27:39.775289] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:32:32.033 Running I/O for 5 seconds... 
00:32:36.264 00:32:36.264 Latency(us) 00:32:36.264 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.264 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:36.264 Verification LBA range: start 0x0 length 0x1f400 00:32:36.264 Ceph0 : 5.02 1959.35 7.65 0.00 0.00 65104.88 2681.02 770226.73 00:32:36.264 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:32:36.264 Verification LBA range: start 0x1f400 length 0x1f400 00:32:36.264 Ceph0 : 5.02 1969.87 7.69 0.00 0.00 64724.46 5034.36 873177.83 00:32:36.264 =================================================================================================================== 00:32:36.264 Total : 3929.22 15.35 0.00 0.00 64914.22 2681.02 873177.83 00:32:37.202 00:32:37.202 real 0m11.184s 00:32:37.202 user 0m18.848s 00:32:37.202 sys 0m2.197s 00:32:37.202 02:27:45 blockdev_rbd.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:37.202 02:27:45 blockdev_rbd.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:32:37.202 ************************************ 00:32:37.202 END TEST bdev_verify 00:32:37.202 ************************************ 00:32:37.460 02:27:46 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:32:37.460 02:27:46 blockdev_rbd -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:37.460 02:27:46 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:32:37.460 02:27:46 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.460 02:27:46 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:37.460 ************************************ 00:32:37.460 START TEST bdev_verify_big_io 00:32:37.460 ************************************ 00:32:37.460 02:27:46 blockdev_rbd.bdev_verify_big_io -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:32:37.460 [2024-07-23 02:27:46.168308] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:37.460 [2024-07-23 02:27:46.168563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126559 ] 00:32:37.718 [2024-07-23 02:27:46.344673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:37.976 [2024-07-23 02:27:46.552792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.976 [2024-07-23 02:27:46.552803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.235 [2024-07-23 02:27:46.941171] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:32:38.235 Running I/O for 5 seconds... 
00:32:43.504 00:32:43.504 Latency(us) 00:32:43.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.504 Job: Ceph0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:43.504 Verification LBA range: start 0x0 length 0x1f40 00:32:43.504 Ceph0 : 5.19 604.99 37.81 0.00 0.00 207209.45 3247.01 398458.88 00:32:43.504 Job: Ceph0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:43.504 Verification LBA range: start 0x1f40 length 0x1f40 00:32:43.504 Ceph0 : 5.12 667.05 41.69 0.00 0.00 187663.65 2502.28 587202.56 00:32:43.504 =================================================================================================================== 00:32:43.504 Total : 1272.04 79.50 0.00 0.00 197020.46 2502.28 587202.56 00:32:44.441 00:32:44.441 real 0m7.193s 00:32:44.441 user 0m13.779s 00:32:44.441 sys 0m1.557s 00:32:44.441 02:27:53 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:44.441 02:27:53 blockdev_rbd.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:32:44.441 ************************************ 00:32:44.441 END TEST bdev_verify_big_io 00:32:44.441 ************************************ 00:32:44.700 02:27:53 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 00:32:44.700 02:27:53 blockdev_rbd -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:44.700 02:27:53 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:32:44.700 02:27:53 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:44.700 02:27:53 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:44.700 ************************************ 00:32:44.700 START TEST bdev_write_zeroes 00:32:44.700 ************************************ 00:32:44.700 02:27:53 blockdev_rbd.bdev_write_zeroes -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:44.700 [2024-07-23 02:27:53.368712] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:44.700 [2024-07-23 02:27:53.369718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126669 ] 00:32:44.959 [2024-07-23 02:27:53.520065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.959 [2024-07-23 02:27:53.726274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:45.527 [2024-07-23 02:27:54.111091] bdev_rbd.c:1359:bdev_rbd_create: *NOTICE*: Add Ceph0 rbd disk to lun 00:32:45.527 Running I/O for 1 seconds... 00:32:47.433 00:32:47.433 Latency(us) 00:32:47.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.433 Job: Ceph0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:47.433 Ceph0 : 1.56 3377.32 13.19 0.00 0.00 37816.77 7685.59 579576.55 00:32:47.433 =================================================================================================================== 00:32:47.433 Total : 3377.32 13.19 0.00 0.00 37816.77 7685.59 579576.55 00:32:48.370 00:32:48.370 real 0m3.758s 00:32:48.370 user 0m3.774s 00:32:48.370 sys 0m0.740s 00:32:48.370 02:27:57 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.370 ************************************ 00:32:48.370 END TEST bdev_write_zeroes 00:32:48.370 ************************************ 00:32:48.370 02:27:57 blockdev_rbd.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:32:48.370 02:27:57 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 0 
00:32:48.370 02:27:57 blockdev_rbd -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:48.370 02:27:57 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:32:48.370 02:27:57 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:48.370 02:27:57 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:48.370 ************************************ 00:32:48.370 START TEST bdev_json_nonenclosed 00:32:48.370 ************************************ 00:32:48.370 02:27:57 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:48.630 [2024-07-23 02:27:57.202417] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:48.630 [2024-07-23 02:27:57.202638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126747 ] 00:32:48.630 [2024-07-23 02:27:57.366998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.889 [2024-07-23 02:27:57.629882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:48.889 [2024-07-23 02:27:57.630293] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:48.889 [2024-07-23 02:27:57.630357] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:48.889 [2024-07-23 02:27:57.630376] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:49.458 00:32:49.458 real 0m1.032s 00:32:49.458 user 0m0.751s 00:32:49.458 sys 0m0.173s 00:32:49.458 02:27:58 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:32:49.458 02:27:58 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:49.458 02:27:58 blockdev_rbd.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:32:49.458 ************************************ 00:32:49.458 END TEST bdev_json_nonenclosed 00:32:49.458 ************************************ 00:32:49.458 02:27:58 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:32:49.458 02:27:58 blockdev_rbd -- bdev/blockdev.sh@781 -- # true 00:32:49.458 02:27:58 blockdev_rbd -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:49.458 02:27:58 blockdev_rbd -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:32:49.458 02:27:58 blockdev_rbd -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:49.458 02:27:58 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:49.458 ************************************ 00:32:49.458 START TEST bdev_json_nonarray 00:32:49.458 ************************************ 00:32:49.458 02:27:58 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:49.717 [2024-07-23 02:27:58.289306] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 
00:32:49.717 [2024-07-23 02:27:58.289583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126778 ] 00:32:49.717 [2024-07-23 02:27:58.451888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.977 [2024-07-23 02:27:58.724306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.977 [2024-07-23 02:27:58.724437] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:32:49.977 [2024-07-23 02:27:58.724484] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:49.977 [2024-07-23 02:27:58.724563] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:50.544 00:32:50.544 real 0m1.044s 00:32:50.544 user 0m0.766s 00:32:50.544 sys 0m0.169s 00:32:50.544 02:27:59 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:32:50.544 02:27:59 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:50.544 02:27:59 blockdev_rbd.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:32:50.545 ************************************ 00:32:50.545 END TEST bdev_json_nonarray 00:32:50.545 ************************************ 00:32:50.545 02:27:59 blockdev_rbd -- common/autotest_common.sh@1142 -- # return 234 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@784 -- # true 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@786 -- # [[ rbd == bdev ]] 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@793 -- # [[ rbd == gpt ]] 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@797 -- # [[ rbd == crypto_sw ]] 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:32:50.545 02:27:59 blockdev_rbd -- 
bdev/blockdev.sh@810 -- # cleanup 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@26 -- # [[ rbd == rbd ]] 00:32:50.545 02:27:59 blockdev_rbd -- bdev/blockdev.sh@27 -- # rbd_cleanup 00:32:50.545 02:27:59 blockdev_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:32:50.545 02:27:59 blockdev_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:32:50.545 + base_dir=/var/tmp/ceph 00:32:50.545 + image=/var/tmp/ceph/ceph_raw.img 00:32:50.545 + dev=/dev/loop200 00:32:50.545 + pkill -9 ceph 00:32:50.545 + sleep 3 00:32:53.832 + umount /dev/loop200p2 00:32:53.832 + losetup -d /dev/loop200 00:32:53.832 + rm -rf /var/tmp/ceph 00:32:53.832 02:28:02 blockdev_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:32:54.091 02:28:02 blockdev_rbd -- bdev/blockdev.sh@30 -- # [[ rbd == daos ]] 00:32:54.091 02:28:02 blockdev_rbd -- bdev/blockdev.sh@34 -- # [[ rbd = \g\p\t ]] 00:32:54.091 02:28:02 blockdev_rbd -- bdev/blockdev.sh@40 -- # [[ rbd == xnvme ]] 00:32:54.091 00:32:54.091 real 1m28.058s 00:32:54.091 user 1m44.785s 00:32:54.091 sys 0m13.632s 00:32:54.091 02:28:02 blockdev_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:54.091 02:28:02 blockdev_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:54.091 ************************************ 00:32:54.091 END TEST blockdev_rbd 00:32:54.091 ************************************ 00:32:54.091 02:28:02 -- common/autotest_common.sh@1142 -- # return 0 00:32:54.091 02:28:02 -- spdk/autotest.sh@332 -- # run_test spdkcli_rbd /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:32:54.091 02:28:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:54.091 02:28:02 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:32:54.091 02:28:02 -- common/autotest_common.sh@10 -- # set +x 00:32:54.091 ************************************ 00:32:54.091 START TEST spdkcli_rbd 00:32:54.091 ************************************ 00:32:54.091 02:28:02 spdkcli_rbd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/rbd.sh 00:32:54.350 * Looking for test storage... 00:32:54.350 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/rbd.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/rbd.sh@11 -- # MATCH_FILE=spdkcli_rbd.test 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/rbd.sh@12 -- # SPDKCLI_BRANCH=/bdevs/rbd 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/rbd.sh@14 -- # trap 'rbd_cleanup; cleanup' EXIT 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/rbd.sh@15 -- # timing_enter run_spdk_tgt 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/rbd.sh@16 -- # run_spdk_tgt 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/common.sh@27 -- # spdk_tgt_pid=126892 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/common.sh@28 -- # waitforlisten 126892 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@829 -- # '[' -z 126892 ']' 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:54.350 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:54.350 02:28:02 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:54.350 02:28:02 spdkcli_rbd -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:32:54.350 [2024-07-23 02:28:03.091736] Starting SPDK v24.09-pre git sha1 f7b31b2b9 / DPDK 24.03.0 initialization... 00:32:54.350 [2024-07-23 02:28:03.091967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126892 ] 00:32:54.609 [2024-07-23 02:28:03.272032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:54.869 [2024-07-23 02:28:03.546733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.869 [2024-07-23 02:28:03.546746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@862 -- # return 0 00:32:55.831 02:28:04 spdkcli_rbd -- spdkcli/rbd.sh@17 -- # timing_exit run_spdk_tgt 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:55.831 02:28:04 spdkcli_rbd -- spdkcli/rbd.sh@19 -- # timing_enter spdkcli_create_rbd_config 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:32:55.831 02:28:04 spdkcli_rbd -- spdkcli/rbd.sh@20 -- # rbd_cleanup 00:32:55.831 02:28:04 
spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:32:55.831 02:28:04 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:32:55.831 + base_dir=/var/tmp/ceph 00:32:55.831 + image=/var/tmp/ceph/ceph_raw.img 00:32:55.831 + dev=/dev/loop200 00:32:55.831 + pkill -9 ceph 00:32:55.831 + sleep 3 00:32:59.118 + umount /dev/loop200p2 00:32:59.118 umount: /dev/loop200p2: no mount point specified. 00:32:59.118 + losetup -d /dev/loop200 00:32:59.118 losetup: /dev/loop200: detach failed: No such device or address 00:32:59.118 + rm -rf /var/tmp/ceph 00:32:59.118 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:32:59.118 02:28:07 spdkcli_rbd -- spdkcli/rbd.sh@21 -- # rbd_setup 127.0.0.1 00:32:59.118 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1005 -- # '[' -z 127.0.0.1 ']' 00:32:59.118 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1009 -- # '[' -n '' ']' 00:32:59.118 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1018 -- # hash ceph 00:32:59.118 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1019 -- # export PG_NUM=128 00:32:59.119 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1019 -- # PG_NUM=128 00:32:59.119 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1020 -- # export RBD_POOL=rbd 00:32:59.119 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1020 -- # RBD_POOL=rbd 00:32:59.119 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1021 -- # export RBD_NAME=foo 00:32:59.119 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1021 -- # RBD_NAME=foo 00:32:59.119 02:28:07 spdkcli_rbd -- common/autotest_common.sh@1022 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:32:59.119 + base_dir=/var/tmp/ceph 00:32:59.119 + image=/var/tmp/ceph/ceph_raw.img 00:32:59.119 + dev=/dev/loop200 00:32:59.119 + pkill -9 ceph 00:32:59.119 + sleep 3 00:33:01.683 + umount /dev/loop200p2 00:33:01.683 umount: /dev/loop200p2: no mount point specified. 
00:33:01.683 + losetup -d /dev/loop200 00:33:01.683 losetup: /dev/loop200: detach failed: No such device or address 00:33:01.683 + rm -rf /var/tmp/ceph 00:33:01.683 02:28:10 spdkcli_rbd -- common/autotest_common.sh@1023 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 127.0.0.1 00:33:01.683 + set -e 00:33:01.683 +++ dirname /home/vagrant/spdk_repo/spdk/scripts/ceph/start.sh 00:33:01.683 ++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/ceph 00:33:01.683 + script_dir=/home/vagrant/spdk_repo/spdk/scripts/ceph 00:33:01.683 + base_dir=/var/tmp/ceph 00:33:01.683 + mon_ip=127.0.0.1 00:33:01.683 + mon_dir=/var/tmp/ceph/mon.a 00:33:01.683 + pid_dir=/var/tmp/ceph/pid 00:33:01.683 + ceph_conf=/var/tmp/ceph/ceph.conf 00:33:01.683 + mnt_dir=/var/tmp/ceph/mnt 00:33:01.683 + image=/var/tmp/ceph_raw.img 00:33:01.683 + dev=/dev/loop200 00:33:01.683 + modprobe loop 00:33:01.683 + umount /dev/loop200p2 00:33:01.683 umount: /dev/loop200p2: no mount point specified. 00:33:01.683 + true 00:33:01.683 + losetup -d /dev/loop200 00:33:01.683 losetup: /dev/loop200: detach failed: No such device or address 00:33:01.683 + true 00:33:01.683 + '[' -d /var/tmp/ceph ']' 00:33:01.683 + mkdir /var/tmp/ceph 00:33:01.683 + cp /home/vagrant/spdk_repo/spdk/scripts/ceph/ceph.conf /var/tmp/ceph/ceph.conf 00:33:01.683 + '[' '!' 
-e /var/tmp/ceph_raw.img ']' 00:33:01.683 + fallocate -l 4G /var/tmp/ceph_raw.img 00:33:01.941 + mknod /dev/loop200 b 7 200 00:33:01.941 mknod: /dev/loop200: File exists 00:33:01.941 + true 00:33:01.941 + losetup /dev/loop200 /var/tmp/ceph_raw.img 00:33:01.941 + PARTED='parted -s' 00:33:01.941 + SGDISK=sgdisk 00:33:01.941 + echo 'Partitioning /dev/loop200' 00:33:01.941 Partitioning /dev/loop200 00:33:01.941 + parted -s /dev/loop200 mktable gpt 00:33:01.941 + sleep 2 00:33:04.475 + parted -s /dev/loop200 mkpart primary 0% 2GiB 00:33:04.475 + parted -s /dev/loop200 mkpart primary 2GiB 100% 00:33:04.475 Setting name on /dev/loop200 00:33:04.475 + partno=0 00:33:04.475 + echo 'Setting name on /dev/loop200' 00:33:04.475 + sgdisk -c 1:osd-device-0-journal /dev/loop200 00:33:05.040 Warning: The kernel is still using the old partition table. 00:33:05.040 The new table will be used at the next reboot or after you 00:33:05.040 run partprobe(8) or kpartx(8) 00:33:05.040 The operation has completed successfully. 00:33:05.040 + sgdisk -c 2:osd-device-0-data /dev/loop200 00:33:05.976 Warning: The kernel is still using the old partition table. 00:33:05.976 The new table will be used at the next reboot or after you 00:33:05.976 run partprobe(8) or kpartx(8) 00:33:05.976 The operation has completed successfully. 
00:33:05.976 + kpartx /dev/loop200 00:33:05.976 loop200p1 : 0 4192256 /dev/loop200 2048 00:33:05.976 loop200p2 : 0 4192256 /dev/loop200 4194304 00:33:05.976 ++ ceph -v 00:33:05.976 ++ awk '{print $3}' 00:33:06.235 + ceph_version=17.2.7 00:33:06.235 + ceph_maj=17 00:33:06.235 + '[' 17 -gt 12 ']' 00:33:06.235 + update_config=true 00:33:06.235 + rm -f /var/log/ceph/ceph-mon.a.log 00:33:06.235 + set_min_mon_release='--set-min-mon-release 14' 00:33:06.235 + ceph_osd_extra_config='--check-needs-journal --no-mon-config' 00:33:06.235 + mnt_pt=/var/tmp/ceph/mnt/osd-device-0-data 00:33:06.235 + mkdir -p /var/tmp/ceph/mnt/osd-device-0-data 00:33:06.235 + mkfs.xfs -f /dev/disk/by-partlabel/osd-device-0-data 00:33:06.235 meta-data=/dev/disk/by-partlabel/osd-device-0-data isize=512 agcount=4, agsize=131008 blks 00:33:06.235 = sectsz=512 attr=2, projid32bit=1 00:33:06.235 = crc=1 finobt=1, sparse=1, rmapbt=0 00:33:06.235 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:33:06.235 data = bsize=4096 blocks=524032, imaxpct=25 00:33:06.235 = sunit=0 swidth=0 blks 00:33:06.235 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:33:06.235 log =internal log bsize=4096 blocks=16384, version=2 00:33:06.235 = sectsz=512 sunit=0 blks, lazy-count=1 00:33:06.235 realtime =none extsz=4096 blocks=0, rtextents=0 00:33:06.235 Discarding blocks...Done. 00:33:06.235 + mount /dev/disk/by-partlabel/osd-device-0-data /var/tmp/ceph/mnt/osd-device-0-data 00:33:06.235 + cat 00:33:06.235 + rm -rf '/var/tmp/ceph/mon.a/*' 00:33:06.235 + mkdir -p /var/tmp/ceph/mon.a 00:33:06.235 + mkdir -p /var/tmp/ceph/pid 00:33:06.235 + rm -f /etc/ceph/ceph.client.admin.keyring 00:33:06.235 + ceph-authtool --create-keyring --gen-key --name=mon. 
/var/tmp/ceph/keyring --cap mon 'allow *' 00:33:06.235 creating /var/tmp/ceph/keyring 00:33:06.236 + ceph-authtool --gen-key --name=client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *' /var/tmp/ceph/keyring 00:33:06.236 + monmaptool --create --clobber --add a 127.0.0.1:12046 --print /var/tmp/ceph/monmap --set-min-mon-release 14 00:33:06.494 monmaptool: monmap file /var/tmp/ceph/monmap 00:33:06.494 monmaptool: generated fsid 6bfef57a-45a7-4e8b-b8b3-fc206cbc225c 00:33:06.494 setting min_mon_release = octopus 00:33:06.494 epoch 0 00:33:06.494 fsid 6bfef57a-45a7-4e8b-b8b3-fc206cbc225c 00:33:06.494 last_changed 2024-07-23T02:28:15.021891+0000 00:33:06.494 created 2024-07-23T02:28:15.021891+0000 00:33:06.494 min_mon_release 15 (octopus) 00:33:06.495 election_strategy: 1 00:33:06.495 0: v2:127.0.0.1:12046/0 mon.a 00:33:06.495 monmaptool: writing epoch 0 to /var/tmp/ceph/monmap (1 monitors) 00:33:06.495 + sh -c 'ulimit -c unlimited && exec ceph-mon --mkfs -c /var/tmp/ceph/ceph.conf -i a --monmap=/var/tmp/ceph/monmap --keyring=/var/tmp/ceph/keyring --mon-data=/var/tmp/ceph/mon.a' 00:33:06.495 + '[' true = true ']' 00:33:06.495 + sed -i 's/mon addr = /mon addr = v2:/g' /var/tmp/ceph/ceph.conf 00:33:06.495 + cp /var/tmp/ceph/keyring /var/tmp/ceph/mon.a/keyring 00:33:06.495 + cp /var/tmp/ceph/ceph.conf /etc/ceph/ceph.conf 00:33:06.495 + cp /var/tmp/ceph/keyring /etc/ceph/keyring 00:33:06.495 + cp /var/tmp/ceph/keyring /etc/ceph/ceph.client.admin.keyring 00:33:06.495 + chmod a+r /etc/ceph/ceph.client.admin.keyring 00:33:06.495 ++ hostname 00:33:06.495 + ceph-run sh -c 'ulimit -n 16384 && ulimit -c unlimited && exec ceph-mon -c /var/tmp/ceph/ceph.conf -i a --keyring=/var/tmp/ceph/keyring --pid-file=/var/tmp/ceph/pid/root@fedora38-cloud-1716830599-074-updated-1705279005.pid --mon-data=/var/tmp/ceph/mon.a' 00:33:06.495 + true 00:33:06.495 + '[' true = true ']' 00:33:06.495 + ceph-conf --name mon.a --show-config-value log_file 00:33:06.495 
/var/log/ceph/ceph-mon.a.log 00:33:06.495 ++ ceph -s 00:33:06.495 ++ grep id 00:33:06.495 ++ awk '{print $2}' 00:33:06.754 + fsid=6bfef57a-45a7-4e8b-b8b3-fc206cbc225c 00:33:06.754 + sed -i 's/perf = true/perf = true\n\tfsid = 6bfef57a-45a7-4e8b-b8b3-fc206cbc225c \n/g' /var/tmp/ceph/ceph.conf 00:33:06.754 + (( ceph_maj < 18 )) 00:33:06.754 + sed -i 's/perf = true/perf = true\n\tosd objectstore = filestore\n/g' /var/tmp/ceph/ceph.conf 00:33:06.754 + cat /var/tmp/ceph/ceph.conf 00:33:06.754 [global] 00:33:06.754 debug_lockdep = 0/0 00:33:06.754 debug_context = 0/0 00:33:06.754 debug_crush = 0/0 00:33:06.754 debug_buffer = 0/0 00:33:06.754 debug_timer = 0/0 00:33:06.754 debug_filer = 0/0 00:33:06.754 debug_objecter = 0/0 00:33:06.754 debug_rados = 0/0 00:33:06.754 debug_rbd = 0/0 00:33:06.754 debug_ms = 0/0 00:33:06.754 debug_monc = 0/0 00:33:06.754 debug_tp = 0/0 00:33:06.754 debug_auth = 0/0 00:33:06.754 debug_finisher = 0/0 00:33:06.754 debug_heartbeatmap = 0/0 00:33:06.754 debug_perfcounter = 0/0 00:33:06.754 debug_asok = 0/0 00:33:06.754 debug_throttle = 0/0 00:33:06.754 debug_mon = 0/0 00:33:06.754 debug_paxos = 0/0 00:33:06.754 debug_rgw = 0/0 00:33:06.754 00:33:06.754 perf = true 00:33:06.754 osd objectstore = filestore 00:33:06.754 00:33:06.754 fsid = 6bfef57a-45a7-4e8b-b8b3-fc206cbc225c 00:33:06.754 00:33:06.754 mutex_perf_counter = false 00:33:06.754 throttler_perf_counter = false 00:33:06.754 rbd cache = false 00:33:06.754 mon_allow_pool_delete = true 00:33:06.754 00:33:06.754 osd_pool_default_size = 1 00:33:06.754 00:33:06.754 [mon] 00:33:06.754 mon_max_pool_pg_num=166496 00:33:06.754 mon_osd_max_split_count = 10000 00:33:06.754 mon_pg_warn_max_per_osd = 10000 00:33:06.754 00:33:06.754 [osd] 00:33:06.754 osd_op_threads = 64 00:33:06.754 filestore_queue_max_ops=5000 00:33:06.754 filestore_queue_committing_max_ops=5000 00:33:06.754 journal_max_write_entries=1000 00:33:06.754 journal_queue_max_ops=3000 00:33:06.754 objecter_inflight_ops=102400 00:33:06.754 
filestore_wbthrottle_enable=false 00:33:06.754 filestore_queue_max_bytes=1048576000 00:33:06.754 filestore_queue_committing_max_bytes=1048576000 00:33:06.754 journal_max_write_bytes=1048576000 00:33:06.754 journal_queue_max_bytes=1048576000 00:33:06.754 ms_dispatch_throttle_bytes=1048576000 00:33:06.754 objecter_inflight_op_bytes=1048576000 00:33:06.754 filestore_max_sync_interval=10 00:33:06.754 osd_client_message_size_cap = 0 00:33:06.754 osd_client_message_cap = 0 00:33:06.754 osd_enable_op_tracker = false 00:33:06.754 filestore_fd_cache_size = 10240 00:33:06.754 filestore_fd_cache_shards = 64 00:33:06.754 filestore_op_threads = 16 00:33:06.754 osd_op_num_shards = 48 00:33:06.754 osd_op_num_threads_per_shard = 2 00:33:06.754 osd_pg_object_context_cache_count = 10240 00:33:06.754 filestore_odsync_write = True 00:33:06.754 journal_dynamic_throttle = True 00:33:06.754 00:33:06.754 [osd.0] 00:33:06.754 osd data = /var/tmp/ceph/mnt/osd-device-0-data 00:33:06.754 osd journal = /dev/disk/by-partlabel/osd-device-0-journal 00:33:06.754 00:33:06.754 # add mon address 00:33:06.754 [mon.a] 00:33:06.754 mon addr = v2:127.0.0.1:12046 00:33:06.754 + i=0 00:33:06.754 + mkdir -p /var/tmp/ceph/mnt 00:33:06.754 ++ uuidgen 00:33:06.754 + uuid=31a55a0b-c97d-4352-908b-162fcef98af1 00:33:06.754 + ceph -c /var/tmp/ceph/ceph.conf osd create 31a55a0b-c97d-4352-908b-162fcef98af1 0 00:33:07.322 0 00:33:07.323 + ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --mkfs --mkkey --osd-uuid 31a55a0b-c97d-4352-908b-162fcef98af1 --check-needs-journal --no-mon-config 00:33:07.323 2024-07-23T02:28:15.851+0000 7fbd93e78400 -1 auth: error reading file: /var/tmp/ceph/mnt/osd-device-0-data/keyring: can't open /var/tmp/ceph/mnt/osd-device-0-data/keyring: (2) No such file or directory 00:33:07.323 2024-07-23T02:28:15.852+0000 7fbd93e78400 -1 created new key in keyring /var/tmp/ceph/mnt/osd-device-0-data/keyring 00:33:07.323 2024-07-23T02:28:15.896+0000 7fbd93e78400 -1 journal check: ondisk fsid 
00000000-0000-0000-0000-000000000000 doesn't match expected 31a55a0b-c97d-4352-908b-162fcef98af1, invalid (someone else's?) journal 00:33:07.323 2024-07-23T02:28:15.928+0000 7fbd93e78400 -1 journal do_read_entry(4096): bad header magic 00:33:07.323 2024-07-23T02:28:15.928+0000 7fbd93e78400 -1 journal do_read_entry(4096): bad header magic 00:33:07.323 ++ hostname 00:33:07.323 + ceph -c /var/tmp/ceph/ceph.conf osd crush add osd.0 1.0 host=fedora38-cloud-1716830599-074-updated-1705279005 root=default 00:33:08.699 add item id 0 name 'osd.0' weight 1 at location {host=fedora38-cloud-1716830599-074-updated-1705279005,root=default} to crush map 00:33:08.699 + ceph -c /var/tmp/ceph/ceph.conf -i /var/tmp/ceph/mnt/osd-device-0-data/keyring auth add osd.0 osd 'allow *' mon 'allow profile osd' mgr 'allow *' 00:33:08.958 added key for osd.0 00:33:08.958 ++ ceph -c /var/tmp/ceph/ceph.conf config get osd osd_class_dir 00:33:09.217 + class_dir=/lib64/rados-classes 00:33:09.217 + [[ -e /lib64/rados-classes ]] 00:33:09.217 + ceph -c /var/tmp/ceph/ceph.conf config set osd osd_class_dir /lib64/rados-classes 00:33:09.476 + pkill -9 ceph-osd 00:33:09.476 + true 00:33:09.476 + sleep 2 00:33:11.379 + mkdir -p /var/tmp/ceph/pid 00:33:11.379 + env -i TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728 ceph-osd -c /var/tmp/ceph/ceph.conf -i 0 --pid-file=/var/tmp/ceph/pid/ceph-osd.0.pid 00:33:11.379 2024-07-23T02:28:20.088+0000 7fb74b06c400 -1 Falling back to public interface 00:33:11.379 2024-07-23T02:28:20.128+0000 7fb74b06c400 -1 journal do_read_entry(8192): bad header magic 00:33:11.379 2024-07-23T02:28:20.128+0000 7fb74b06c400 -1 journal do_read_entry(8192): bad header magic 00:33:11.379 2024-07-23T02:28:20.136+0000 7fb74b06c400 -1 osd.0 0 log_to_monitors true 00:33:11.638 02:28:20 spdkcli_rbd -- common/autotest_common.sh@1025 -- # ceph osd pool create rbd 128 00:33:12.573 pool 'rbd' created 00:33:12.573 02:28:21 spdkcli_rbd -- common/autotest_common.sh@1026 -- # rbd create foo --size 1000 
00:33:15.894 02:28:24 spdkcli_rbd -- spdkcli/rbd.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py '"/bdevs/rbd create rbd foo 512'\'' '\''Ceph0'\'' True "/bdevs/rbd' create rbd foo 512 Ceph1 'True 00:33:15.894 timing_exit spdkcli_create_rbd_config 00:33:15.894 00:33:15.894 timing_enter spdkcli_check_match 00:33:15.894 check_match 00:33:15.894 timing_exit spdkcli_check_match 00:33:15.894 00:33:15.894 timing_enter spdkcli_clear_rbd_config 00:33:15.894 /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py "/bdevs/rbd' delete Ceph0 Ceph0 '"/bdevs/rbd delete_all'\'' '\''Ceph1'\'' ' 00:33:16.461 Executing command: [' ', True] 00:33:16.461 02:28:25 spdkcli_rbd -- spdkcli/rbd.sh@31 -- # rbd_cleanup 00:33:16.461 02:28:25 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:33:16.461 02:28:25 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:33:16.461 + base_dir=/var/tmp/ceph 00:33:16.461 + image=/var/tmp/ceph/ceph_raw.img 00:33:16.461 + dev=/dev/loop200 00:33:16.461 + pkill -9 ceph 00:33:16.461 + sleep 3 00:33:19.747 + umount /dev/loop200p2 00:33:19.747 + losetup -d /dev/loop200 00:33:19.747 + rm -rf /var/tmp/ceph 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:33:19.747 02:28:28 spdkcli_rbd -- spdkcli/rbd.sh@32 -- # timing_exit spdkcli_clear_rbd_config 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:19.747 02:28:28 spdkcli_rbd -- spdkcli/rbd.sh@34 -- # killprocess 126892 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 126892 ']' 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 126892 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@953 -- # uname 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@953 -- # '[' Linux = Linux 
']' 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 126892 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 126892' 00:33:19.747 killing process with pid 126892 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@967 -- # kill 126892 00:33:19.747 02:28:28 spdkcli_rbd -- common/autotest_common.sh@972 -- # wait 126892 00:33:21.648 02:28:30 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # rbd_cleanup 00:33:21.648 02:28:30 spdkcli_rbd -- common/autotest_common.sh@1031 -- # hash ceph 00:33:21.648 02:28:30 spdkcli_rbd -- common/autotest_common.sh@1032 -- # /home/vagrant/spdk_repo/spdk/scripts/ceph/stop.sh 00:33:21.648 + base_dir=/var/tmp/ceph 00:33:21.648 + image=/var/tmp/ceph/ceph_raw.img 00:33:21.648 + dev=/dev/loop200 00:33:21.648 + pkill -9 ceph 00:33:21.648 + sleep 3 00:33:25.051 + umount /dev/loop200p2 00:33:25.051 umount: /dev/loop200p2: no mount point specified. 
00:33:25.051 + losetup -d /dev/loop200 00:33:25.051 losetup: /dev/loop200: detach failed: No such device or address 00:33:25.051 + rm -rf /var/tmp/ceph 00:33:25.051 02:28:33 spdkcli_rbd -- common/autotest_common.sh@1033 -- # rm -f /var/tmp/ceph_raw.img 00:33:25.051 02:28:33 spdkcli_rbd -- spdkcli/rbd.sh@1 -- # cleanup 00:33:25.051 02:28:33 spdkcli_rbd -- spdkcli/common.sh@10 -- # '[' -n 126892 ']' 00:33:25.051 02:28:33 spdkcli_rbd -- spdkcli/common.sh@11 -- # killprocess 126892 00:33:25.051 Process with pid 126892 is not found 00:33:25.051 02:28:33 spdkcli_rbd -- common/autotest_common.sh@948 -- # '[' -z 126892 ']' 00:33:25.052 02:28:33 spdkcli_rbd -- common/autotest_common.sh@952 -- # kill -0 126892 00:33:25.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 952: kill: (126892) - No such process 00:33:25.052 02:28:33 spdkcli_rbd -- common/autotest_common.sh@975 -- # echo 'Process with pid 126892 is not found' 00:33:25.052 02:28:33 spdkcli_rbd -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:33:25.052 02:28:33 spdkcli_rbd -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:25.052 02:28:33 spdkcli_rbd -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:25.052 02:28:33 spdkcli_rbd -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_rbd.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:25.052 00:33:25.052 real 0m30.347s 00:33:25.052 user 0m55.378s 00:33:25.052 sys 0m1.587s 00:33:25.052 02:28:33 spdkcli_rbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:25.052 ************************************ 00:33:25.052 END TEST spdkcli_rbd 00:33:25.052 ************************************ 00:33:25.052 02:28:33 spdkcli_rbd -- common/autotest_common.sh@10 -- # set +x 00:33:25.052 02:28:33 -- common/autotest_common.sh@1142 -- # return 0 00:33:25.052 02:28:33 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:25.052 02:28:33 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
00:33:25.052 02:28:33 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:25.052 02:28:33 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:25.052 02:28:33 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:25.052 02:28:33 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:25.052 02:28:33 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:25.052 02:28:33 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:25.052 02:28:33 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:25.052 02:28:33 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:25.052 02:28:33 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:25.052 02:28:33 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:25.052 02:28:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:25.052 02:28:33 -- common/autotest_common.sh@10 -- # set +x 00:33:25.052 02:28:33 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:25.052 02:28:33 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:25.052 02:28:33 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:25.052 02:28:33 -- common/autotest_common.sh@10 -- # set +x 00:33:26.430 INFO: APP EXITING 00:33:26.430 INFO: killing all VMs 00:33:26.430 INFO: killing vhost app 00:33:26.430 INFO: EXIT DONE 00:33:26.689 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:26.689 Waiting for block devices as requested 00:33:26.689 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:26.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:27.627 0000:00:10.0 (1b36 0010): Active devices: data@nvme1n1, so not binding PCI dev 00:33:27.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:27.627 Cleaning 00:33:27.627 Removing: /var/run/dpdk/spdk0/config 00:33:27.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:27.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:27.627 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:33:27.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:33:27.627 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:33:27.627 Removing: /var/run/dpdk/spdk0/hugepage_info
00:33:27.627 Removing: /var/run/dpdk/spdk1/config
00:33:27.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:33:27.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:33:27.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:33:27.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:33:27.627 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:33:27.627 Removing: /var/run/dpdk/spdk1/hugepage_info
00:33:27.627 Removing: /dev/shm/iscsi_trace.pid77656
00:33:27.627 Removing: /dev/shm/spdk_tgt_trace.pid58769
00:33:27.627 Removing: /var/run/dpdk/spdk0
00:33:27.627 Removing: /var/run/dpdk/spdk1
00:33:27.627 Removing: /var/run/dpdk/spdk_pid122920
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123234
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123284
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123370
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123446
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123519
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123710
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123755
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123788
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123821
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123854
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123956
00:33:27.627 Removing: /var/run/dpdk/spdk_pid123999
00:33:27.627 Removing: /var/run/dpdk/spdk_pid124247
00:33:27.627 Removing: /var/run/dpdk/spdk_pid124556
00:33:27.627 Removing: /var/run/dpdk/spdk_pid124812
00:33:27.627 Removing: /var/run/dpdk/spdk_pid125699
00:33:27.627 Removing: /var/run/dpdk/spdk_pid125755
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126040
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126236
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126412
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126559
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126669
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126747
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126778
00:33:27.627 Removing: /var/run/dpdk/spdk_pid126892
00:33:27.627 Removing: /var/run/dpdk/spdk_pid58558
00:33:27.627 Removing: /var/run/dpdk/spdk_pid58769
00:33:27.627 Removing: /var/run/dpdk/spdk_pid58992
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59096
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59141
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59269
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59297
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59441
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59635
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59830
00:33:27.627 Removing: /var/run/dpdk/spdk_pid59927
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60025
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60133
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60228
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60267
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60304
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60372
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60478
00:33:27.627 Removing: /var/run/dpdk/spdk_pid60917
00:33:27.887 Removing: /var/run/dpdk/spdk_pid60987
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61055
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61071
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61208
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61224
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61359
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61377
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61441
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61470
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61523
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61547
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61729
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61766
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61847
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61917
00:33:27.887 Removing: /var/run/dpdk/spdk_pid61953
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62026
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62068
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62119
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62160
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62201
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62248
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62294
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62335
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62382
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62427
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62469
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62516
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62557
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62602
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62650
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62691
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62732
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62787
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62831
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62878
00:33:27.887 Removing: /var/run/dpdk/spdk_pid62925
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63008
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63121
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63471
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63491
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63522
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63572
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63577
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63606
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63634
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63645
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63695
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63716
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63774
00:33:27.887 Removing: /var/run/dpdk/spdk_pid63867
00:33:27.887 Removing: /var/run/dpdk/spdk_pid64640
00:33:27.887 Removing: /var/run/dpdk/spdk_pid66522
00:33:27.887 Removing: /var/run/dpdk/spdk_pid66817
00:33:27.887 Removing: /var/run/dpdk/spdk_pid67139
00:33:27.887 Removing: /var/run/dpdk/spdk_pid67404
00:33:27.887 Removing: /var/run/dpdk/spdk_pid68002
00:33:27.887 Removing: /var/run/dpdk/spdk_pid72449
00:33:27.887 Removing: /var/run/dpdk/spdk_pid76529
00:33:27.887 Removing: /var/run/dpdk/spdk_pid77295
00:33:27.887 Removing: /var/run/dpdk/spdk_pid77335
00:33:27.887 Removing: /var/run/dpdk/spdk_pid77656
00:33:27.887 Removing: /var/run/dpdk/spdk_pid78981
00:33:27.887 Removing: /var/run/dpdk/spdk_pid79380
00:33:27.887 Removing: /var/run/dpdk/spdk_pid79431
00:33:27.887 Removing: /var/run/dpdk/spdk_pid79838
00:33:27.887 Removing: /var/run/dpdk/spdk_pid82233
00:33:27.887 Clean
00:33:28.146 02:28:36 -- common/autotest_common.sh@1451 -- # return 0
00:33:28.146 02:28:36 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:33:28.146 02:28:36 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:28.146 02:28:36 -- common/autotest_common.sh@10 -- # set +x
00:33:28.146 02:28:36 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:33:28.146 02:28:36 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:28.146 02:28:36 -- common/autotest_common.sh@10 -- # set +x
00:33:28.146 02:28:36 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:33:28.146 02:28:36 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:33:28.146 02:28:36 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:33:28.146 02:28:36 -- spdk/autotest.sh@391 -- # hash lcov
00:33:28.146 02:28:36 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:33:28.146 02:28:36 -- spdk/autotest.sh@393 -- # hostname
00:33:28.146 02:28:36 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:33:28.406 geninfo: WARNING: invalid characters removed from testname!
00:33:50.339 02:28:57 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:51.717 02:29:00 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:54.256 02:29:02 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:56.159 02:29:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:33:58.693 02:29:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:01.226 02:29:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:34:03.757 02:29:11 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:34:03.757 02:29:11 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:34:03.757 02:29:12 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:03.757 02:29:12 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:03.757 02:29:12 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:03.757 02:29:12 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:03.757 02:29:12 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:03.757 02:29:12 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:03.757 02:29:12 -- paths/export.sh@5 -- $ export PATH
00:34:03.757 02:29:12 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:03.757 02:29:12 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:34:03.757 02:29:12 -- common/autobuild_common.sh@447 -- $ date +%s
00:34:03.757 02:29:12 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721701752.XXXXXX
00:34:03.757 02:29:12 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721701752.sM3U3Z
00:34:03.757 02:29:12 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:34:03.757 02:29:12 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:34:03.757 02:29:12 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:34:03.757 02:29:12 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:34:03.757 02:29:12 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:34:03.757 02:29:12 -- common/autobuild_common.sh@463 -- $ get_config_params
00:34:03.757 02:29:12 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:34:03.757 02:29:12 -- common/autotest_common.sh@10 -- $ set +x
00:34:03.757 02:29:12 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-rbd --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:34:03.757 02:29:12 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:34:03.757 02:29:12 -- pm/common@17 -- $ local monitor
00:34:03.757 02:29:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:03.757 02:29:12 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:03.757 02:29:12 -- pm/common@25 -- $ sleep 1
00:34:03.757 02:29:12 -- pm/common@21 -- $ date +%s
00:34:03.757 02:29:12 -- pm/common@21 -- $ date +%s
00:34:03.757 02:29:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721701752
00:34:03.757 02:29:12 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721701752
00:34:03.757 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721701752_collect-vmstat.pm.log
00:34:03.757 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721701752_collect-cpu-load.pm.log
00:34:04.325 02:29:13 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:34:04.325 02:29:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:34:04.325 02:29:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:34:04.325 02:29:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:34:04.325 02:29:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:34:04.325 02:29:13 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:34:04.325 02:29:13 -- spdk/autopackage.sh@19 -- $ timing_finish
00:34:04.325 02:29:13 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:34:04.325 02:29:13 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:34:04.325 02:29:13 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:34:04.325 02:29:13 -- spdk/autopackage.sh@20 -- $ exit 0
00:34:04.325 02:29:13 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:34:04.325 02:29:13 -- pm/common@29 -- $ signal_monitor_resources TERM
00:34:04.325 02:29:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:34:04.325 02:29:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:04.325 02:29:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:34:04.325 02:29:13 -- pm/common@44 -- $ pid=129380
00:34:04.325 02:29:13 -- pm/common@50 -- $ kill -TERM 129380
00:34:04.325 02:29:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:04.325 02:29:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:34:04.584 02:29:13 -- pm/common@44 -- $ pid=129382
00:34:04.584 02:29:13 -- pm/common@50 -- $ kill -TERM 129382
00:34:04.584 + [[ -n 5102 ]]
00:34:04.584 + sudo kill 5102
00:34:04.594 [Pipeline] }
00:34:04.611 [Pipeline] // timeout
00:34:04.617 [Pipeline] }
00:34:04.633 [Pipeline] // stage
00:34:04.639 [Pipeline] }
00:34:04.657 [Pipeline] // catchError
00:34:04.666 [Pipeline] stage
00:34:04.668 [Pipeline] { (Stop VM)
00:34:04.687 [Pipeline] sh
00:34:04.975 + vagrant halt
00:34:07.506 ==> default: Halting domain...
00:34:14.082 [Pipeline] sh
00:34:14.362 + vagrant destroy -f
00:34:16.986 ==> default: Removing domain...
00:34:17.564 [Pipeline] sh
00:34:17.844 + mv output /var/jenkins/workspace/iscsi-vg-autotest/output
00:34:17.853 [Pipeline] }
00:34:17.869 [Pipeline] // stage
00:34:17.874 [Pipeline] }
00:34:17.889 [Pipeline] // dir
00:34:17.894 [Pipeline] }
00:34:17.909 [Pipeline] // wrap
00:34:17.915 [Pipeline] }
00:34:17.929 [Pipeline] // catchError
00:34:17.937 [Pipeline] stage
00:34:17.939 [Pipeline] { (Epilogue)
00:34:17.952 [Pipeline] sh
00:34:18.233 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:34:23.513 [Pipeline] catchError
00:34:23.515 [Pipeline] {
00:34:23.529 [Pipeline] sh
00:34:23.810 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:34:23.810 Artifacts sizes are good
00:34:23.819 [Pipeline] }
00:34:23.835 [Pipeline] // catchError
00:34:23.844 [Pipeline] archiveArtifacts
00:34:23.851 Archiving artifacts
00:34:24.630 [Pipeline] cleanWs
00:34:24.638 [WS-CLEANUP] Deleting project workspace...
00:34:24.638 [WS-CLEANUP] Deferred wipeout is used...
00:34:24.643 [WS-CLEANUP] done
00:34:24.644 [Pipeline] }
00:34:24.655 [Pipeline] // stage
00:34:24.658 [Pipeline] }
00:34:24.667 [Pipeline] // node
00:34:24.670 [Pipeline] End of Pipeline
00:34:24.693 Finished: SUCCESS